00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1042 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3704 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.096 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.119 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.142 Using shallow fetch with depth 1 00:00:00.142 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.142 > git --version # timeout=10 00:00:00.166 > git --version # 'git version 2.39.2' 00:00:00.166 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.090 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.102 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.116 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.116 > git config core.sparsecheckout # timeout=10 00:00:08.128 > git read-tree -mu HEAD # timeout=10 00:00:08.144 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.168 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.168 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.312 [Pipeline] Start of Pipeline 00:00:08.325 [Pipeline] library 00:00:08.327 Loading library shm_lib@master 00:00:08.327 Library shm_lib@master is cached. Copying from home. 00:00:08.341 [Pipeline] node 00:00:08.360 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.361 [Pipeline] { 00:00:08.367 [Pipeline] catchError 00:00:08.368 [Pipeline] { 00:00:08.380 [Pipeline] wrap 00:00:08.390 [Pipeline] { 00:00:08.396 [Pipeline] stage 00:00:08.397 [Pipeline] { (Prologue) 00:00:08.622 [Pipeline] sh 00:00:09.360 + logger -p user.info -t JENKINS-CI 00:00:09.395 [Pipeline] echo 00:00:09.397 Node: GP11 00:00:09.404 [Pipeline] sh 00:00:09.733 [Pipeline] setCustomBuildProperty 00:00:09.745 [Pipeline] echo 00:00:09.747 Cleanup processes 00:00:09.752 [Pipeline] sh 00:00:10.039 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.039 15561 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.056 [Pipeline] sh 00:00:10.353 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.353 ++ grep -v 'sudo pgrep' 00:00:10.353 ++ awk '{print $1}' 00:00:10.353 + sudo kill -9 00:00:10.353 + true 00:00:10.367 [Pipeline] cleanWs 00:00:10.379 [WS-CLEANUP] Deleting project workspace... 00:00:10.379 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.391 [WS-CLEANUP] done 00:00:10.395 [Pipeline] setCustomBuildProperty 00:00:10.411 [Pipeline] sh 00:00:10.703 + sudo git config --global --replace-all safe.directory '*' 00:00:10.794 [Pipeline] httpRequest 00:00:13.652 [Pipeline] echo 00:00:13.654 Sorcerer 10.211.164.101 is alive 00:00:13.665 [Pipeline] retry 00:00:13.667 [Pipeline] { 00:00:13.683 [Pipeline] httpRequest 00:00:13.689 HttpMethod: GET 00:00:13.690 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.691 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.717 Response Code: HTTP/1.1 200 OK 00:00:13.718 Success: Status code 200 is in the accepted range: 200,404 00:00:13.718 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:20.954 [Pipeline] } 00:00:20.970 [Pipeline] // retry 00:00:20.976 [Pipeline] sh 00:00:21.272 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.292 [Pipeline] httpRequest 00:00:21.729 [Pipeline] echo 00:00:21.731 Sorcerer 10.211.164.101 is alive 00:00:21.740 [Pipeline] retry 00:00:21.741 [Pipeline] { 00:00:21.755 [Pipeline] httpRequest 00:00:21.760 HttpMethod: GET 00:00:21.761 URL: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:21.762 Sending request to url: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:21.784 Response Code: HTTP/1.1 200 OK 00:00:21.784 Success: Status code 200 is in the accepted range: 200,404 00:00:21.785 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:38.710 [Pipeline] } 00:01:38.729 [Pipeline] // retry 00:01:38.737 [Pipeline] sh 00:01:39.039 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:41.610 [Pipeline] sh 00:01:41.905 + git -C spdk log --oneline -n5 00:01:41.905 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:01:41.905 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:01:41.905 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:41.905 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:41.905 60adca7e1 lib/mlx5: API to configure UMR 00:01:41.927 [Pipeline] withCredentials 00:01:41.939 > git --version # timeout=10 00:01:41.952 > git --version # 'git version 2.39.2' 00:01:41.975 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:41.977 [Pipeline] { 00:01:41.986 [Pipeline] retry 00:01:41.989 [Pipeline] { 00:01:42.005 [Pipeline] sh 00:01:42.557 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:42.570 [Pipeline] } 00:01:42.590 [Pipeline] // retry 00:01:42.595 [Pipeline] } 00:01:42.612 [Pipeline] // withCredentials 00:01:42.622 [Pipeline] httpRequest 00:01:43.387 [Pipeline] echo 00:01:43.389 Sorcerer 10.211.164.101 is alive 00:01:43.399 [Pipeline] retry 00:01:43.401 [Pipeline] { 00:01:43.415 [Pipeline] httpRequest 00:01:43.421 HttpMethod: GET 00:01:43.421 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:43.423 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:43.448 Response Code: HTTP/1.1 200 OK 00:01:43.448 Success: Status code 200 is in the accepted range: 200,404 00:01:43.449 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.133 [Pipeline] } 00:02:02.150 [Pipeline] // retry 00:02:02.159 [Pipeline] sh 00:02:02.451 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:03.843 [Pipeline] sh 00:02:04.126 + git -C dpdk log --oneline -n5 00:02:04.126 eeb0605f11 version: 23.11.0 00:02:04.126 238778122a doc: update release notes for 23.11 00:02:04.126 46aa6b3cfc doc: fix description of RSS features 00:02:04.126 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:04.126 7e421ae345 devtools: support skipping forbid rule check 00:02:04.135 [Pipeline] } 00:02:04.153 [Pipeline] // stage 00:02:04.162 [Pipeline] stage 00:02:04.164 [Pipeline] { (Prepare) 00:02:04.184 [Pipeline] writeFile 00:02:04.200 [Pipeline] sh 00:02:04.491 + logger -p user.info -t JENKINS-CI 00:02:04.505 [Pipeline] sh 00:02:04.787 + logger -p user.info -t JENKINS-CI 00:02:04.798 [Pipeline] sh 00:02:05.080 + cat autorun-spdk.conf 00:02:05.080 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.080 SPDK_TEST_NVMF=1 00:02:05.080 SPDK_TEST_NVME_CLI=1 00:02:05.080 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.080 SPDK_TEST_NVMF_NICS=e810 00:02:05.080 SPDK_TEST_VFIOUSER=1 00:02:05.080 SPDK_RUN_UBSAN=1 00:02:05.080 NET_TYPE=phy 00:02:05.080 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.080 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.089 RUN_NIGHTLY=1 00:02:05.093 [Pipeline] readFile 00:02:05.130 [Pipeline] withEnv 00:02:05.131 [Pipeline] { 00:02:05.143 [Pipeline] sh 00:02:05.480 + set -ex 00:02:05.480 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:05.480 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.480 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.480 ++ SPDK_TEST_NVMF=1 00:02:05.480 ++ SPDK_TEST_NVME_CLI=1 00:02:05.480 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.480 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.480 ++ SPDK_TEST_VFIOUSER=1 00:02:05.480 ++ SPDK_RUN_UBSAN=1 00:02:05.480 ++ NET_TYPE=phy 00:02:05.480 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.480 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.480 ++ RUN_NIGHTLY=1 00:02:05.480 + case $SPDK_TEST_NVMF_NICS in 00:02:05.480 + DRIVERS=ice 00:02:05.480 + [[ tcp == \r\d\m\a ]] 00:02:05.480 + [[ -n ice ]] 00:02:05.480 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:05.480 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:08.784 rmmod: ERROR: Module irdma is not currently loaded 00:02:08.784 rmmod: ERROR: Module i40iw is not currently loaded 00:02:08.784 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:08.784 + true 00:02:08.784 + for D in $DRIVERS 00:02:08.784 + sudo modprobe ice 00:02:08.784 + exit 0 00:02:08.794 [Pipeline] } 00:02:08.807 [Pipeline] // withEnv 00:02:08.811 [Pipeline] } 00:02:08.824 [Pipeline] // stage 00:02:08.833 [Pipeline] catchError 00:02:08.834 [Pipeline] { 00:02:08.845 [Pipeline] timeout 00:02:08.845 Timeout set to expire in 1 hr 0 min 00:02:08.847 [Pipeline] { 00:02:08.860 [Pipeline] stage 00:02:08.862 [Pipeline] { (Tests) 00:02:08.875 [Pipeline] sh 00:02:09.169 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.169 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.169 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.169 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:09.169 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.169 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:09.169 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:09.169 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:09.169 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:09.169 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:09.169 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:09.169 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:09.169 + source /etc/os-release 00:02:09.169 ++ NAME='Fedora Linux' 00:02:09.169 ++ VERSION='39 (Cloud Edition)' 00:02:09.169 ++ ID=fedora 00:02:09.169 ++ VERSION_ID=39 00:02:09.169 ++ VERSION_CODENAME= 00:02:09.169 ++ PLATFORM_ID=platform:f39 00:02:09.169 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:09.169 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.169 ++ LOGO=fedora-logo-icon 00:02:09.169 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:09.169 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.169 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:09.169 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.169 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.169 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.169 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:09.169 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.169 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:09.169 ++ SUPPORT_END=2024-11-12 00:02:09.169 ++ VARIANT='Cloud Edition' 00:02:09.169 ++ VARIANT_ID=cloud 00:02:09.169 + uname -a 00:02:09.169 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:09.169 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:10.110 Hugepages 00:02:10.110 node hugesize free / total 00:02:10.110 node0 1048576kB 0 / 0 00:02:10.110 node0 2048kB 0 / 0 00:02:10.110 node1 1048576kB 0 / 0 00:02:10.110 node1 2048kB 0 / 0 00:02:10.110 00:02:10.110 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.370 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:10.370 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:10.370 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:10.370 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:10.370 + rm -f /tmp/spdk-ld-path 00:02:10.370 + source autorun-spdk.conf 00:02:10.370 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.370 ++ SPDK_TEST_NVMF=1 00:02:10.370 ++ SPDK_TEST_NVME_CLI=1 00:02:10.370 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.370 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.370 ++ SPDK_TEST_VFIOUSER=1 00:02:10.370 ++ SPDK_RUN_UBSAN=1 00:02:10.370 ++ NET_TYPE=phy 00:02:10.370 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.370 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.370 ++ RUN_NIGHTLY=1 00:02:10.370 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.370 + [[ -n '' ]] 00:02:10.370 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.370 + for M in /var/spdk/build-*-manifest.txt 00:02:10.370 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.370 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.370 + for M in /var/spdk/build-*-manifest.txt 00:02:10.370 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.370 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.370 + for M in /var/spdk/build-*-manifest.txt 00:02:10.370 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.370 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.370 ++ uname 00:02:10.370 + [[ Linux == \L\i\n\u\x ]] 00:02:10.370 + sudo dmesg -T 00:02:10.370 + sudo dmesg --clear 00:02:10.370 + dmesg_pid=16263 00:02:10.370 + [[ Fedora Linux == FreeBSD ]] 00:02:10.370 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.370 + sudo dmesg -Tw 00:02:10.370 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.370 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.370 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.370 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.370 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.370 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.370 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.370 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.370 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.370 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.370 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.370 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.370 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.370 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.370 00:29:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:10.370 00:29:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.370 00:29:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:10.370 00:29:26 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:10.370 00:29:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.630 00:29:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:10.630 00:29:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.630 00:29:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.630 00:29:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.630 00:29:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.630 00:29:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.630 00:29:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.630 00:29:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.630 00:29:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.630 00:29:26 -- paths/export.sh@5 -- $ export PATH 00:02:10.630 00:29:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.630 00:29:26 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.630 00:29:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:10.630 00:29:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733527766.XXXXXX 00:02:10.630 00:29:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733527766.FKuHn9 00:02:10.630 00:29:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:10.630 00:29:26 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:10.630 00:29:26 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.630 00:29:26 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:10.630 00:29:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.630 00:29:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.630 00:29:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:10.630 00:29:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:10.630 00:29:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.630 00:29:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:10.630 00:29:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:10.630 00:29:26 -- pm/common@17 -- $ local monitor 00:02:10.630 00:29:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.630 00:29:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.630 00:29:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.630 00:29:26 -- pm/common@21 -- $ date +%s 00:02:10.630 00:29:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.630 00:29:26 -- pm/common@21 -- $ date +%s 00:02:10.630 00:29:26 -- pm/common@25 -- $ sleep 1 00:02:10.630 00:29:26 -- pm/common@21 -- $ date +%s 00:02:10.630 00:29:26 -- pm/common@21 -- $ date +%s 00:02:10.630 00:29:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733527766 00:02:10.630 00:29:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733527766 00:02:10.630 00:29:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733527766 00:02:10.630 00:29:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733527766 00:02:10.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733527766_collect-cpu-load.pm.log 00:02:10.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733527766_collect-vmstat.pm.log 00:02:10.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733527766_collect-cpu-temp.pm.log 00:02:10.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733527766_collect-bmc-pm.bmc.pm.log 00:02:11.576 00:29:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:11.576 00:29:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.576 00:29:27 
-- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.576 00:29:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.576 00:29:27 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.576 Fri Dec 6 11:29:27 PM UTC 2024 00:02:11.576 00:29:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.576 v25.01-pre-311-ga2f5e1c2d 00:02:11.576 00:29:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.576 00:29:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.576 00:29:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.576 00:29:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:11.576 00:29:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:11.576 00:29:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.576 ************************************ 00:02:11.576 START TEST ubsan 00:02:11.576 ************************************ 00:02:11.576 00:29:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:11.576 using ubsan 00:02:11.576 00:02:11.576 real 0m0.000s 00:02:11.576 user 0m0.000s 00:02:11.576 sys 0m0.000s 00:02:11.576 00:29:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:11.576 00:29:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.576 ************************************ 00:02:11.576 END TEST ubsan 00:02:11.576 ************************************ 00:02:11.576 00:29:27 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:11.576 00:29:27 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:11.576 00:29:27 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:11.576 00:29:27 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:11.576 00:29:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:11.576 00:29:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.576 ************************************ 00:02:11.576 START TEST build_native_dpdk 00:02:11.576 ************************************ 00:02:11.576 00:29:27 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:11.576 eeb0605f11 version: 23.11.0 00:02:11.576 238778122a doc: update release notes for 23.11 00:02:11.576 46aa6b3cfc doc: fix description of RSS features 00:02:11.576 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:11.576 7e421ae345 devtools: support skipping forbid rule check 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:11.576 00:29:27 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:11.576 00:29:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:11.576 00:29:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.577 00:29:27 build_native_dpdk -- 
scripts/common.sh@337 -- $ IFS=.-: 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.577 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:11.837 patching file config/rte_config.h 00:02:11.837 Hunk #1 succeeded at 60 (offset 1 line). 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:11.837 patching file lib/pcapng/rte_pcapng.c 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.837 00:29:27 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:11.837 00:29:27 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:18.409 The Meson build system 00:02:18.409 Version: 1.5.0 00:02:18.409 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:18.409 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:18.409 Build type: native build 00:02:18.409 Program cat found: YES (/usr/bin/cat) 00:02:18.409 Project name: DPDK 00:02:18.409 Project version: 23.11.0 00:02:18.409 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.409 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:18.409 Host machine cpu family: x86_64 00:02:18.409 Host machine cpu: x86_64 00:02:18.409 Message: ## Building in Developer Mode ## 00:02:18.409 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.409 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:18.409 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.409 Program python3 found: YES (/usr/bin/python3) 00:02:18.409 Program cat found: YES (/usr/bin/cat) 00:02:18.409 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:18.409 Compiler for C supports arguments -march=native: YES 00:02:18.409 Checking for size of "void *" : 8 00:02:18.409 Checking for size of "void *" : 8 (cached) 00:02:18.409 Library m found: YES 00:02:18.409 Library numa found: YES 00:02:18.409 Has header "numaif.h" : YES 00:02:18.409 Library fdt found: NO 00:02:18.409 Library execinfo found: NO 00:02:18.409 Has header "execinfo.h" : YES 00:02:18.409 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.409 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.409 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.409 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.409 Run-time dependency openssl found: YES 3.1.1 00:02:18.409 Run-time dependency libpcap found: YES 1.10.4 00:02:18.409 Has header "pcap.h" with dependency libpcap: YES 00:02:18.409 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.409 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.409 Compiler for C supports arguments -Wformat: YES 00:02:18.409 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.409 Compiler for C supports arguments -Wformat-security: NO 00:02:18.409 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.409 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.409 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.409 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.409 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.409 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.409 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.409 Compiler for C supports arguments -Wundef: YES 00:02:18.409 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.409 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.409 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.409 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.409 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.409 Program objdump found: YES (/usr/bin/objdump) 00:02:18.409 Compiler for C supports arguments -mavx512f: YES 00:02:18.409 Checking if "AVX512 checking" compiles: YES 00:02:18.409 Fetching value of define "__SSE4_2__" : 1 00:02:18.409 Fetching value of define "__AES__" : 1 00:02:18.409 Fetching value of define "__AVX__" : 1 00:02:18.409 Fetching value of define "__AVX2__" : (undefined) 00:02:18.409 Fetching value of define "__AVX512BW__" : (undefined) 00:02:18.409 Fetching value of define "__AVX512CD__" : (undefined) 00:02:18.409 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:18.409 Fetching value of define "__AVX512F__" : (undefined) 00:02:18.409 Fetching value of define "__AVX512VL__" : (undefined) 00:02:18.409 Fetching value of define "__PCLMUL__" : 1 00:02:18.409 Fetching value of define "__RDRND__" : 1 00:02:18.409 Fetching value of define "__RDSEED__" : (undefined) 00:02:18.409 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:18.409 Fetching value of define "__znver1__" : (undefined) 00:02:18.409 Fetching value of define "__znver2__" : (undefined) 00:02:18.409 Fetching value of define "__znver3__" : (undefined) 00:02:18.409 Fetching value of define "__znver4__" : (undefined) 00:02:18.409 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.409 Message: lib/log: Defining dependency "log" 00:02:18.409 Message: lib/kvargs: Defining dependency 
"kvargs" 00:02:18.409 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.409 Checking for function "getentropy" : NO 00:02:18.409 Message: lib/eal: Defining dependency "eal" 00:02:18.409 Message: lib/ring: Defining dependency "ring" 00:02:18.409 Message: lib/rcu: Defining dependency "rcu" 00:02:18.409 Message: lib/mempool: Defining dependency "mempool" 00:02:18.409 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.409 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.409 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.409 Compiler for C supports arguments -mpclmul: YES 00:02:18.409 Compiler for C supports arguments -maes: YES 00:02:18.409 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.409 Compiler for C supports arguments -mavx512bw: YES 00:02:18.409 Compiler for C supports arguments -mavx512dq: YES 00:02:18.409 Compiler for C supports arguments -mavx512vl: YES 00:02:18.409 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.409 Compiler for C supports arguments -mavx2: YES 00:02:18.409 Compiler for C supports arguments -mavx: YES 00:02:18.409 Message: lib/net: Defining dependency "net" 00:02:18.409 Message: lib/meter: Defining dependency "meter" 00:02:18.409 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.409 Message: lib/pci: Defining dependency "pci" 00:02:18.409 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.409 Message: lib/metrics: Defining dependency "metrics" 00:02:18.409 Message: lib/hash: Defining dependency "hash" 00:02:18.409 Message: lib/timer: Defining dependency "timer" 00:02:18.409 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.409 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:18.409 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:18.409 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:18.409 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:18.409 Message: lib/acl: Defining dependency "acl" 00:02:18.409 Message: lib/bbdev: Defining dependency "bbdev" 00:02:18.409 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:18.409 Run-time dependency libelf found: YES 0.191 00:02:18.409 Message: lib/bpf: Defining dependency "bpf" 00:02:18.409 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:18.409 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.409 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.409 Message: lib/distributor: Defining dependency "distributor" 00:02:18.409 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.409 Message: lib/efd: Defining dependency "efd" 00:02:18.409 Message: lib/eventdev: Defining dependency "eventdev" 00:02:18.409 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:18.409 Message: lib/gpudev: Defining dependency "gpudev" 00:02:18.409 Message: lib/gro: Defining dependency "gro" 00:02:18.409 Message: lib/gso: Defining dependency "gso" 00:02:18.409 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:18.409 Message: lib/jobstats: Defining dependency "jobstats" 00:02:18.409 Message: lib/latencystats: Defining dependency "latencystats" 00:02:18.409 Message: lib/lpm: Defining dependency "lpm" 00:02:18.409 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.410 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.410 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:18.410 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:18.410 Message: lib/member: Defining dependency "member" 00:02:18.410 Message: lib/pcapng: Defining dependency "pcapng" 00:02:18.410 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:18.410 Message: lib/power: Defining dependency "power" 00:02:18.410 Message: lib/rawdev: Defining dependency "rawdev" 00:02:18.410 Message: lib/regexdev: Defining dependency "regexdev" 00:02:18.410 Message: lib/mldev: Defining dependency "mldev" 00:02:18.410 Message: lib/rib: Defining dependency "rib" 00:02:18.410 Message: lib/reorder: Defining dependency "reorder" 00:02:18.410 Message: lib/sched: Defining dependency "sched" 00:02:18.410 Message: lib/security: Defining dependency "security" 00:02:18.410 Message: lib/stack: Defining dependency "stack" 00:02:18.410 Has header "linux/userfaultfd.h" : YES 00:02:18.410 Has header "linux/vduse.h" : YES 00:02:18.410 Message: lib/vhost: Defining dependency "vhost" 00:02:18.410 Message: lib/ipsec: Defining dependency "ipsec" 00:02:18.410 Message: lib/pdcp: Defining dependency "pdcp" 00:02:18.410 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.410 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.410 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:18.410 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:18.410 Message: lib/fib: Defining dependency "fib" 00:02:18.410 Message: lib/port: Defining dependency "port" 00:02:18.410 Message: lib/pdump: Defining dependency "pdump" 00:02:18.410 Message: lib/table: Defining dependency "table" 00:02:18.410 Message: lib/pipeline: Defining dependency "pipeline" 00:02:18.410 Message: lib/graph: Defining dependency "graph" 00:02:18.410 Message: lib/node: Defining dependency "node" 00:02:19.791 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.791 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.791 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.791 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.791 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:19.791 Compiler for C supports arguments -Wno-unused-value: YES 00:02:19.791 Compiler for C supports arguments -Wno-format: YES 00:02:19.791 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.791 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:19.791 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.791 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.791 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.791 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.791 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.791 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.791 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.791 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.791 Has header "sys/epoll.h" : YES 00:02:19.791 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:19.791 Configuring doxy-api-html.conf using configuration 00:02:19.791 Configuring doxy-api-man.conf using configuration 00:02:19.791 Program mandb found: YES (/usr/bin/mandb) 00:02:19.791 Program sphinx-build found: NO 00:02:19.791 Configuring rte_build_config.h using configuration 00:02:19.791 Message: 00:02:19.791 ================= 00:02:19.791 Applications Enabled 
00:02:19.791 ================= 00:02:19.791 00:02:19.791 apps: 00:02:19.791 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:19.791 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:19.791 test-pmd, test-regex, test-sad, test-security-perf, 00:02:19.791 00:02:19.791 Message: 00:02:19.791 ================= 00:02:19.791 Libraries Enabled 00:02:19.791 ================= 00:02:19.791 00:02:19.791 libs: 00:02:19.791 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.791 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:19.791 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:19.791 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:19.791 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:19.791 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:19.791 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:19.791 00:02:19.791 00:02:19.791 Message: 00:02:19.791 =============== 00:02:19.791 Drivers Enabled 00:02:19.791 =============== 00:02:19.791 00:02:19.791 common: 00:02:19.791 00:02:19.791 bus: 00:02:19.791 pci, vdev, 00:02:19.791 mempool: 00:02:19.791 ring, 00:02:19.791 dma: 00:02:19.791 00:02:19.791 net: 00:02:19.791 i40e, 00:02:19.791 raw: 00:02:19.791 00:02:19.791 crypto: 00:02:19.791 00:02:19.791 compress: 00:02:19.791 00:02:19.791 regex: 00:02:19.791 00:02:19.791 ml: 00:02:19.791 00:02:19.791 vdpa: 00:02:19.791 00:02:19.791 event: 00:02:19.791 00:02:19.791 baseband: 00:02:19.791 00:02:19.791 gpu: 00:02:19.791 00:02:19.791 00:02:19.791 Message: 00:02:19.791 ================= 00:02:19.791 Content Skipped 00:02:19.791 ================= 00:02:19.791 00:02:19.791 apps: 00:02:19.791 00:02:19.791 libs: 00:02:19.791 00:02:19.791 drivers: 00:02:19.791 common/cpt: not in enabled drivers build config 00:02:19.791 common/dpaax: not in enabled drivers build config 00:02:19.791 common/iavf: not in enabled drivers build config 00:02:19.791 common/idpf: not in enabled drivers build config 00:02:19.791 common/mvep: not in enabled drivers build config 00:02:19.791 common/octeontx: not in enabled drivers build config 00:02:19.791 bus/auxiliary: not in enabled drivers build config 00:02:19.791 bus/cdx: not in enabled drivers build config 00:02:19.791 bus/dpaa: not in enabled drivers build config 00:02:19.791 bus/fslmc: not in enabled drivers build config 00:02:19.791 bus/ifpga: not in enabled drivers build config 00:02:19.791 bus/platform: not in enabled drivers build config 00:02:19.791 bus/vmbus: not in enabled drivers build config 00:02:19.791 common/cnxk: not in enabled drivers build config 00:02:19.791 common/mlx5: not in enabled drivers build config 00:02:19.791 common/nfp: not in enabled drivers build config 00:02:19.791 common/qat: not in enabled drivers build config 00:02:19.791 common/sfc_efx: not in enabled drivers build config 00:02:19.791 mempool/bucket: not in enabled drivers build config 00:02:19.791 mempool/cnxk: not in enabled drivers build config 00:02:19.791 mempool/dpaa: not in enabled drivers build config 00:02:19.791 mempool/dpaa2: not in enabled drivers build config 00:02:19.791 mempool/octeontx: not in enabled drivers build config 00:02:19.791 mempool/stack: not in enabled drivers build config 00:02:19.791 dma/cnxk: not in enabled drivers build config 00:02:19.791 dma/dpaa: not in enabled drivers build config 00:02:19.791 dma/dpaa2: not in enabled 
drivers build config 00:02:19.791 dma/hisilicon: not in enabled drivers build config 00:02:19.791 dma/idxd: not in enabled drivers build config 00:02:19.791 dma/ioat: not in enabled drivers build config 00:02:19.791 dma/skeleton: not in enabled drivers build config 00:02:19.791 net/af_packet: not in enabled drivers build config 00:02:19.791 net/af_xdp: not in enabled drivers build config 00:02:19.791 net/ark: not in enabled drivers build config 00:02:19.791 net/atlantic: not in enabled drivers build config 00:02:19.791 net/avp: not in enabled drivers build config 00:02:19.791 net/axgbe: not in enabled drivers build config 00:02:19.791 net/bnx2x: not in enabled drivers build config 00:02:19.791 net/bnxt: not in enabled drivers build config 00:02:19.791 net/bonding: not in enabled drivers build config 00:02:19.791 net/cnxk: not in enabled drivers build config 00:02:19.791 net/cpfl: not in enabled drivers build config 00:02:19.791 net/cxgbe: not in enabled drivers build config 00:02:19.791 net/dpaa: not in enabled drivers build config 00:02:19.791 net/dpaa2: not in enabled drivers build config 00:02:19.791 net/e1000: not in enabled drivers build config 00:02:19.791 net/ena: not in enabled drivers build config 00:02:19.791 net/enetc: not in enabled drivers build config 00:02:19.791 net/enetfec: not in enabled drivers build config 00:02:19.791 net/enic: not in enabled drivers build config 00:02:19.791 net/failsafe: not in enabled drivers build config 00:02:19.791 net/fm10k: not in enabled drivers build config 00:02:19.791 net/gve: not in enabled drivers build config 00:02:19.791 net/hinic: not in enabled drivers build config 00:02:19.791 net/hns3: not in enabled drivers build config 00:02:19.791 net/iavf: not in enabled drivers build config 00:02:19.791 net/ice: not in enabled drivers build config 00:02:19.791 net/idpf: not in enabled drivers build config 00:02:19.791 net/igc: not in enabled drivers build config 00:02:19.791 net/ionic: not in enabled drivers build config 00:02:19.791 net/ipn3ke: not in enabled drivers build config 00:02:19.791 net/ixgbe: not in enabled drivers build config 00:02:19.791 net/mana: not in enabled drivers build config 00:02:19.791 net/memif: not in enabled drivers build config 00:02:19.791 net/mlx4: not in enabled drivers build config 00:02:19.791 net/mlx5: not in enabled drivers build config 00:02:19.791 net/mvneta: not in enabled drivers build config 00:02:19.791 net/mvpp2: not in enabled drivers build config 00:02:19.791 net/netvsc: not in enabled drivers build config 00:02:19.791 net/nfb: not in enabled drivers build config 00:02:19.791 net/nfp: not in enabled drivers build config 00:02:19.791 net/ngbe: not in enabled drivers build config 00:02:19.791 net/null: not in enabled drivers build config 00:02:19.791 net/octeontx: not in enabled drivers build config 00:02:19.792 net/octeon_ep: not in enabled drivers build config 00:02:19.792 net/pcap: not in enabled drivers build config 00:02:19.792 net/pfe: not in enabled drivers build config 00:02:19.792 net/qede: not in enabled drivers build config 00:02:19.792 net/ring: not in enabled drivers build config 00:02:19.792 net/sfc: not in enabled drivers build config 00:02:19.792 net/softnic: not in enabled drivers build config 00:02:19.792 net/tap: not in enabled drivers build config 00:02:19.792 net/thunderx: not in enabled drivers build config 00:02:19.792 net/txgbe: not in enabled drivers build config 00:02:19.792 net/vdev_netvsc: not in enabled drivers build config 00:02:19.792 net/vhost: not in enabled drivers 
build config 00:02:19.792 net/virtio: not in enabled drivers build config 00:02:19.792 net/vmxnet3: not in enabled drivers build config 00:02:19.792 raw/cnxk_bphy: not in enabled drivers build config 00:02:19.792 raw/cnxk_gpio: not in enabled drivers build config 00:02:19.792 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:19.792 raw/ifpga: not in enabled drivers build config 00:02:19.792 raw/ntb: not in enabled drivers build config 00:02:19.792 raw/skeleton: not in enabled drivers build config 00:02:19.792 crypto/armv8: not in enabled drivers build config 00:02:19.792 crypto/bcmfs: not in enabled drivers build config 00:02:19.792 crypto/caam_jr: not in enabled drivers build config 00:02:19.792 crypto/ccp: not in enabled drivers build config 00:02:19.792 crypto/cnxk: not in enabled drivers build config 00:02:19.792 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.792 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.792 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.792 crypto/mlx5: not in enabled drivers build config 00:02:19.792 crypto/mvsam: not in enabled drivers build config 00:02:19.792 crypto/nitrox: not in enabled drivers build config 00:02:19.792 crypto/null: not in enabled drivers build config 00:02:19.792 crypto/octeontx: not in enabled drivers build config 00:02:19.792 crypto/openssl: not in enabled drivers build config 00:02:19.792 crypto/scheduler: not in enabled drivers build config 00:02:19.792 crypto/uadk: not in enabled drivers build config 00:02:19.792 crypto/virtio: not in enabled drivers build config 00:02:19.792 compress/isal: not in enabled drivers build config 00:02:19.792 compress/mlx5: not in enabled drivers build config 00:02:19.792 compress/octeontx: not in enabled drivers build config 00:02:19.792 compress/zlib: not in enabled drivers build config 00:02:19.792 regex/mlx5: not in enabled drivers build config 00:02:19.792 regex/cn9k: not in enabled drivers build config 00:02:19.792 ml/cnxk: not in enabled drivers build config 00:02:19.792 vdpa/ifc: not in enabled drivers build config 00:02:19.792 vdpa/mlx5: not in enabled drivers build config 00:02:19.792 vdpa/nfp: not in enabled drivers build config 00:02:19.792 vdpa/sfc: not in enabled drivers build config 00:02:19.792 event/cnxk: not in enabled drivers build config 00:02:19.792 event/dlb2: not in enabled drivers build config 00:02:19.792 event/dpaa: not in enabled drivers build config 00:02:19.792 event/dpaa2: not in enabled drivers build config 00:02:19.792 event/dsw: not in enabled drivers build config 00:02:19.792 event/opdl: not in enabled drivers build config 00:02:19.792 event/skeleton: not in enabled drivers build config 00:02:19.792 event/sw: not in enabled drivers build config 00:02:19.792 event/octeontx: not in enabled drivers build config 00:02:19.792 baseband/acc: not in enabled drivers build config 00:02:19.792 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:19.792 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:19.792 baseband/la12xx: not in enabled drivers build config 00:02:19.792 baseband/null: not in enabled drivers build config 00:02:19.792 baseband/turbo_sw: not in enabled drivers build config 00:02:19.792 gpu/cuda: not in enabled drivers build config 00:02:19.792 00:02:19.792 00:02:19.792 Build targets in project: 220 00:02:19.792 00:02:19.792 DPDK 23.11.0 00:02:19.792 00:02:19.792 User defined options 00:02:19.792 libdir : lib 00:02:19.792 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:19.792 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:19.792 c_link_args :
00:02:19.792 enable_docs : false
00:02:19.792 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:19.792 enable_kmods : false
00:02:19.792 machine : native
00:02:19.792 tests : false
00:02:19.792 
00:02:19.792 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:19.792 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:19.792 00:29:35 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:02:19.792 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:19.792 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:19.792 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:19.792 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:19.792 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:19.792 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:19.792 [6/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:19.792 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:19.792 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:19.792 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:19.792 [10/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:20.055 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:20.055 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:20.055 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:20.055 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:20.055 [15/710] Linking static target lib/librte_kvargs.a
00:02:20.055 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:20.055 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:20.055 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:20.055 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:20.317 [20/710] Linking static target lib/librte_log.a
00:02:20.317 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:20.317 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.894 [23/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:20.894 [24/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.894 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:20.894 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:20.894 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:20.894 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:20.894 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:20.894 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:20.894 [31/710]
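The "User defined options" summary above corresponds to a standard DPDK Meson configuration; the deprecation warning refers to invoking `meson` without the explicit `setup` subcommand. As a rough, illustrative sketch only (the exact command line run by common/autobuild_common.sh is not shown in this log, and the enable_drivers list is abbreviated here), an equivalent configuration and build could look like:

    # Hypothetical reconstruction of the configuration summarized above, using the
    # non-deprecated `meson setup` form; not the literal command the CI script ran.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=$PWD/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus/pci,bus/vdev,mempool/ring,net/i40e   # abbreviated; full list as in enable_drivers above
    # Build with the same parallelism shown in the log:
    ninja -C build-tmp -j48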
Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.894 [32/710] Linking target lib/librte_log.so.24.0 00:02:20.895 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.895 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:20.895 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.895 [36/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.895 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.895 [38/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.895 [39/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.895 [40/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.158 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:21.158 [42/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.158 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.158 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:21.158 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.158 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.158 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:21.158 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:21.158 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.158 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:21.158 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.158 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.158 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.158 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:21.158 [55/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:21.158 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:21.158 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.158 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.158 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.421 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.421 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.421 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.421 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.685 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.685 [65/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.685 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:21.685 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.685 [68/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.685 [69/710] Linking static target lib/librte_pci.a 00:02:21.685 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.950 [71/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.950 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.950 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.951 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.951 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.951 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.212 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.212 [78/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.212 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.212 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.212 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.212 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.212 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.212 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.212 [85/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.212 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.212 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.212 [88/710] Linking static target lib/librte_ring.a 00:02:22.212 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.212 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.212 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.212 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.212 [93/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.212 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.479 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.479 [96/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.479 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.479 [98/710] Linking static target lib/librte_meter.a 00:02:22.479 [99/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.479 [100/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.479 [101/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.479 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.479 [103/710] Linking static target lib/librte_telemetry.a 00:02:22.479 [104/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.479 [105/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.479 [106/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.479 [107/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.479 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.745 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.745 [110/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.745 [111/710] Linking static target lib/librte_eal.a 00:02:22.745 [112/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.745 [113/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.745 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.745 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.745 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.745 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.006 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.006 [119/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.006 [120/710] Linking static target lib/librte_net.a 00:02:23.006 [121/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.006 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.006 [123/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.006 [124/710] Linking static target lib/librte_mempool.a 00:02:23.006 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.006 [126/710] Linking static target lib/librte_cmdline.a 00:02:23.270 [127/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.270 [128/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.270 [129/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.270 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:23.270 [131/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.270 [132/710] Linking static target lib/librte_cfgfile.a 00:02:23.270 [133/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:23.270 [134/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.532 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:23.532 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:23.532 [137/710] Linking static target lib/librte_metrics.a 00:02:23.532 [138/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:23.532 [139/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:23.532 [140/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:23.799 [141/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.799 [142/710] Linking static target lib/librte_rcu.a 00:02:23.799 [143/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:23.799 [144/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:23.799 [145/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:23.799 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:23.799 [147/710] Linking static target lib/librte_bitratestats.a 00:02:23.799 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:23.799 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:23.799 [150/710] Linking target lib/librte_kvargs.so.24.0 00:02:23.799 [151/710] Linking target lib/librte_telemetry.so.24.0 00:02:24.066 [152/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:24.067 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.067 [154/710] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.067 [155/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:24.067 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.067 [157/710] Linking static target lib/librte_timer.a 00:02:24.067 [158/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:24.067 [159/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.067 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.067 [161/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.067 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:24.067 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.067 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.330 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.330 [166/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.330 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:24.330 [168/710] Linking static target lib/librte_bbdev.a 00:02:24.330 [169/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:24.330 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.597 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.597 [172/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.597 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.597 [174/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.597 [175/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.597 [176/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.870 [177/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.870 [178/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.870 [179/710] Linking static target lib/librte_compressdev.a 00:02:24.870 [180/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:24.870 [181/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.870 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.136 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.136 [184/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.402 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:25.402 [186/710] Linking static target lib/librte_distributor.a 00:02:25.402 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.402 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:25.402 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.402 [190/710] Linking static target lib/librte_bpf.a 00:02:25.402 [191/710] Linking static target lib/librte_dmadev.a 00:02:25.402 [192/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.402 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.671 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.671 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.671 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.671 [197/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.671 [198/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:25.671 [199/710] Linking static target lib/librte_dispatcher.a 00:02:25.671 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:25.671 [201/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.671 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.671 [203/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.939 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:25.939 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.939 [206/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.939 [207/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.939 [208/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.939 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.939 [210/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.939 [211/710] Linking static target lib/librte_gpudev.a 00:02:25.939 [212/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.939 [213/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.939 [214/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.939 [215/710] Linking static target lib/librte_gro.a 00:02:25.939 [216/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.939 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.939 [218/710] Linking static target lib/librte_jobstats.a 00:02:26.200 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:26.200 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:26.200 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:26.200 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.465 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.465 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:26.465 [225/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:26.465 [226/710] Linking static target lib/librte_latencystats.a 00:02:26.465 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.465 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.465 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:26.731 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:26.731 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:26.731 
[232/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:26.731 [233/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:26.731 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:26.731 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:26.731 [236/710] Linking static target lib/librte_ip_frag.a 00:02:26.731 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.731 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:26.995 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:26.995 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.995 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.995 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.995 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:27.258 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:27.259 [245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.259 [246/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.259 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:27.259 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:27.525 [249/710] Linking static target lib/librte_gso.a 00:02:27.525 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:27.525 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.525 [252/710] Linking static target lib/librte_regexdev.a 00:02:27.525 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:27.525 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:27.525 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:27.525 [256/710] Linking static target lib/librte_rawdev.a 00:02:27.525 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:27.525 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:27.792 [259/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:27.792 [260/710] Linking static target lib/librte_efd.a 00:02:27.792 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:27.792 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:27.792 [263/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.792 [264/710] Linking static target lib/librte_mldev.a 00:02:27.792 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:27.792 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:27.792 [267/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:27.792 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:27.792 [269/710] Linking static target lib/librte_pcapng.a 00:02:27.792 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:27.792 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:02:28.057 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:28.057 [273/710] Linking static target 
lib/librte_stack.a 00:02:28.057 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:28.057 [275/710] Linking static target lib/librte_lpm.a 00:02:28.057 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:28.057 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.057 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.057 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:28.322 [280/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.322 [281/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:28.322 [282/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:28.322 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.322 [284/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.322 [285/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.322 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.322 [287/710] Linking static target lib/librte_hash.a 00:02:28.322 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:28.322 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:02:28.322 [290/710] Linking static target lib/librte_acl.a 00:02:28.588 [291/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.588 [292/710] Linking static target lib/librte_reorder.a 00:02:28.588 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.588 [294/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.588 [295/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.588 [296/710] Linking static target lib/librte_power.a 00:02:28.588 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:28.588 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.588 [299/710] Linking static target lib/librte_security.a 00:02:28.854 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.854 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:28.854 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.855 [303/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.855 [304/710] Linking static target lib/librte_mbuf.a 00:02:28.855 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:28.855 [306/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.119 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.119 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.119 [309/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:29.119 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:29.119 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:29.119 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:29.119 [313/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:29.119 [314/710] Linking static target lib/librte_rib.a 00:02:29.384 
[315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:29.384 [316/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.384 [317/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:29.384 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.384 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:29.384 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:29.384 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:29.384 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:29.384 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:29.384 [324/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:29.650 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:29.650 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.650 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.650 [328/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.913 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.913 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:29.913 [331/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.913 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:29.913 [333/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:30.177 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:30.177 [335/710] Linking static target lib/librte_eventdev.a 00:02:30.177 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.177 [337/710] Linking static target lib/librte_member.a 00:02:30.177 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.444 [339/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.444 [340/710] Linking static target lib/librte_cryptodev.a 00:02:30.444 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.444 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:30.444 [343/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.444 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:30.444 [345/710] Linking static target lib/librte_ethdev.a 00:02:30.711 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:30.711 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:30.711 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:30.711 [349/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:30.711 [350/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:30.711 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:30.711 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.711 [353/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:30.711 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:30.711 
[355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:30.711 [356/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:30.711 [357/710] Linking static target lib/librte_sched.a 00:02:30.978 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:30.978 [359/710] Linking static target lib/librte_fib.a 00:02:30.978 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:30.978 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:30.978 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:30.978 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:30.978 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:31.246 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:31.246 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:31.246 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.246 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:31.246 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:31.510 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.510 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.510 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.510 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:31.778 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:31.778 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:31.778 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:31.778 [377/710] Linking static target lib/librte_pdump.a 00:02:31.778 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.778 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:31.778 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:32.041 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:32.041 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:32.041 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:32.041 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:32.041 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:32.041 [386/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:32.041 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:32.041 [388/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.307 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:32.307 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.307 [391/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:32.307 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:32.307 [393/710] Linking static target lib/librte_ipsec.a 00:02:32.307 [394/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:32.307 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:32.573 [396/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:32.573 [397/710] Linking static target lib/librte_table.a 00:02:32.573 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:32.841 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:32.841 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:32.841 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.841 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:33.117 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:33.117 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:33.117 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:33.117 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:33.377 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:33.377 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:33.377 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:33.377 [410/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:33.377 [411/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:33.377 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:33.378 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:33.378 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:33.647 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.647 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.647 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:33.647 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.647 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:33.915 [420/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.915 [421/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:33.915 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:33.915 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.915 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.915 [425/710] Linking target lib/librte_eal.so.24.0 00:02:33.915 [426/710] Linking static target drivers/librte_bus_vdev.a 00:02:33.915 [427/710] Linking static target lib/librte_port.a 00:02:34.181 [428/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:34.181 [429/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:34.181 [430/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:34.181 [431/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:34.181 [432/710] Linking static target lib/librte_graph.a 00:02:34.181 [433/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.181 [434/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:34.181 
[435/710] Linking static target drivers/librte_bus_pci.a 00:02:34.447 [436/710] Linking target lib/librte_ring.so.24.0 00:02:34.447 [437/710] Linking target lib/librte_meter.so.24.0 00:02:34.447 [438/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:34.447 [439/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:34.447 [440/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:34.447 [441/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.447 [442/710] Linking target lib/librte_pci.so.24.0 00:02:34.447 [443/710] Linking target lib/librte_timer.so.24.0 00:02:34.447 [444/710] Linking target lib/librte_cfgfile.so.24.0 00:02:34.447 [445/710] Linking target lib/librte_acl.so.24.0 00:02:34.716 [446/710] Linking target lib/librte_dmadev.so.24.0 00:02:34.716 [447/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:34.716 [448/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:34.716 [449/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:34.716 [450/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:34.716 [451/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:34.716 [452/710] Linking target lib/librte_rcu.so.24.0 00:02:34.716 [453/710] Linking target lib/librte_mempool.so.24.0 00:02:34.716 [454/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:34.716 [455/710] Linking target lib/librte_jobstats.so.24.0 00:02:34.716 [456/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:34.716 [457/710] Linking target lib/librte_stack.so.24.0 00:02:34.716 [458/710] Linking target lib/librte_rawdev.so.24.0 00:02:34.716 [459/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:34.716 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:34.716 [461/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:34.716 [462/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:34.716 [463/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:34.716 [464/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:34.716 [465/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:34.983 [466/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.983 [467/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:34.983 [468/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:34.983 [469/710] Linking target lib/librte_mbuf.so.24.0 00:02:34.983 [470/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:34.983 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:35.247 [472/710] Linking target lib/librte_rib.so.24.0 00:02:35.247 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.247 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.247 [475/710] Linking static target drivers/librte_mempool_ring.a 00:02:35.247 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:35.247 [477/710] Compiling C 
object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:35.247 [478/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:35.247 [479/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:35.247 [480/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.247 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:35.247 [482/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:35.247 [483/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:35.247 [484/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.247 [485/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:35.247 [486/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:35.247 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:35.247 [488/710] Linking target lib/librte_net.so.24.0 00:02:35.512 [489/710] Linking target lib/librte_bbdev.so.24.0 00:02:35.512 [490/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:35.512 [491/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:35.512 [492/710] Linking target lib/librte_compressdev.so.24.0 00:02:35.512 [493/710] Linking target lib/librte_gpudev.so.24.0 00:02:35.512 [494/710] Linking target lib/librte_distributor.so.24.0 00:02:35.512 [495/710] Linking target lib/librte_cryptodev.so.24.0 00:02:35.512 [496/710] Linking target lib/librte_reorder.so.24.0 00:02:35.512 [497/710] Linking target lib/librte_mldev.so.24.0 00:02:35.512 [498/710] Linking target lib/librte_regexdev.so.24.0 00:02:35.512 [499/710] Linking target lib/librte_sched.so.24.0 00:02:35.512 [500/710] Linking target lib/librte_fib.so.24.0 00:02:35.512 [501/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:35.512 [502/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:35.512 [503/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.512 [504/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:35.780 [505/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:35.780 [506/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:35.780 [507/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:35.780 [508/710] Linking target lib/librte_cmdline.so.24.0 00:02:35.780 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:35.780 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:35.780 [511/710] Linking target lib/librte_hash.so.24.0 00:02:35.780 [512/710] Linking target lib/librte_security.so.24.0 00:02:35.780 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:35.780 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:36.048 [515/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:36.048 [516/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:36.048 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:36.048 [518/710] Linking target lib/librte_efd.so.24.0 00:02:36.048 [519/710] Linking target 
lib/librte_lpm.so.24.0 00:02:36.048 [520/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:36.048 [521/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:36.048 [522/710] Linking target lib/librte_member.so.24.0 00:02:36.048 [523/710] Linking target lib/librte_ipsec.so.24.0 00:02:36.307 [524/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:36.307 [525/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:36.307 [526/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:36.307 [527/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:36.572 [528/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:36.572 [529/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:36.572 [530/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:36.572 [531/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:36.572 [532/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:36.839 [533/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:36.839 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:36.839 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:36.839 [536/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:36.839 [537/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:36.839 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:36.839 [539/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:37.102 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:37.102 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:37.365 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:37.365 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:37.365 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:37.365 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:37.631 [546/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:37.631 [547/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:37.631 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:37.631 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:37.631 [550/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:37.631 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:37.631 [552/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:37.894 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:37.894 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:37.894 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:37.894 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:38.171 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:38.171 [558/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:38.171 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:38.436 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:38.436 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:38.699 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:38.965 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:38.965 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:38.965 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:38.965 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.965 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:38.965 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:38.965 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:39.229 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:39.229 [571/710] Linking target lib/librte_ethdev.so.24.0 00:02:39.229 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:39.229 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:39.229 [574/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:39.229 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:39.229 [576/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:39.229 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:39.493 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:39.493 [579/710] Linking target lib/librte_metrics.so.24.0 00:02:39.493 [580/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:39.493 [581/710] Linking target lib/librte_bpf.so.24.0 00:02:39.493 [582/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:39.493 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:39.493 [584/710] Linking target lib/librte_gro.so.24.0 00:02:39.493 [585/710] Linking target lib/librte_eventdev.so.24.0 00:02:39.493 [586/710] Linking target lib/librte_gso.so.24.0 00:02:39.493 [587/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:39.493 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:39.493 [589/710] Linking target lib/librte_pcapng.so.24.0 00:02:39.493 [590/710] Linking static target lib/librte_pdcp.a 00:02:39.760 [591/710] Linking target lib/librte_ip_frag.so.24.0 00:02:39.760 [592/710] Linking target lib/librte_power.so.24.0 00:02:39.760 [593/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:39.760 [594/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:39.760 [595/710] Linking target lib/librte_bitratestats.so.24.0 00:02:39.760 [596/710] Linking target lib/librte_latencystats.so.24.0 00:02:39.760 [597/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:39.760 [598/710] Generating symbol file 
lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:39.760 [599/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:39.760 [600/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:39.760 [601/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:39.760 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:39.760 [603/710] Linking target lib/librte_dispatcher.so.24.0 00:02:40.023 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:40.023 [605/710] Linking target lib/librte_pdump.so.24.0 00:02:40.023 [606/710] Linking target lib/librte_graph.so.24.0 00:02:40.023 [607/710] Linking target lib/librte_port.so.24.0 00:02:40.023 [608/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:40.023 [609/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:40.283 [610/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.283 [611/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:40.283 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:40.283 [613/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:40.283 [614/710] Linking target lib/librte_pdcp.so.24.0 00:02:40.283 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:40.283 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:40.283 [617/710] Linking target lib/librte_table.so.24.0 00:02:40.283 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:40.283 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:40.283 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:40.554 [621/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:40.554 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:40.554 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:40.554 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:40.554 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:40.813 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:40.813 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:40.813 [628/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:40.813 [629/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:41.072 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:41.332 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:41.332 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:41.332 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:41.591 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:41.591 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:41.591 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:41.591 [637/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:41.591 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:41.591 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:41.591 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:41.851 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:41.851 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:41.851 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:41.851 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:42.110 [645/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:42.110 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:42.110 [647/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:42.110 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:42.110 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:42.370 [650/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:42.630 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:42.630 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:42.630 [653/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:42.630 [654/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:42.630 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:42.630 [656/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:42.630 [657/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:42.630 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:42.889 [659/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:43.169 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:43.169 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:43.169 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:43.169 [663/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:43.169 [664/710] Linking static target drivers/librte_net_i40e.a 00:02:43.428 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:43.428 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:43.687 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:43.687 [668/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:43.945 [669/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.945 [670/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:43.945 [671/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:44.203 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:44.203 [673/710] Linking static target lib/librte_node.a 00:02:44.203 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:44.461 [675/710] Generating lib/node.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:44.461 [676/710] Linking target lib/librte_node.so.24.0 00:02:45.834 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:46.092 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:46.350 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:47.726 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:48.292 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:53.558 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:32.265 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:32.265 [684/710] Linking static target lib/librte_vhost.a 00:03:32.265 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.265 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:37.541 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:37.541 [688/710] Linking static target lib/librte_pipeline.a 00:03:37.541 [689/710] Linking target app/dpdk-test-cmdline 00:03:37.541 [690/710] Linking target app/dpdk-pdump 00:03:37.541 [691/710] Linking target app/dpdk-test-acl 00:03:37.541 [692/710] Linking target app/dpdk-test-pipeline 00:03:37.541 [693/710] Linking target app/dpdk-graph 00:03:37.541 [694/710] Linking target app/dpdk-test-flow-perf 00:03:37.541 [695/710] Linking target app/dpdk-test-mldev 00:03:37.541 [696/710] Linking target app/dpdk-test-crypto-perf 00:03:37.541 [697/710] Linking target app/dpdk-test-eventdev 00:03:37.541 [698/710] Linking target app/dpdk-test-sad 00:03:37.541 [699/710] Linking target app/dpdk-proc-info 00:03:37.541 [700/710] Linking target app/dpdk-test-dma-perf 00:03:37.541 [701/710] Linking target app/dpdk-test-gpudev 00:03:37.541 [702/710] Linking target app/dpdk-test-regex 00:03:37.541 [703/710] Linking target app/dpdk-dumpcap 00:03:37.541 [704/710] Linking target app/dpdk-test-fib 00:03:37.541 [705/710] Linking target app/dpdk-test-security-perf 00:03:37.541 [706/710] Linking target app/dpdk-test-bbdev 00:03:37.541 [707/710] Linking target app/dpdk-test-compress-perf 00:03:37.541 [708/710] Linking target app/dpdk-testpmd 00:03:40.081 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.081 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:40.081 00:30:55 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:40.081 00:30:55 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:40.081 00:30:55 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:40.081 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:40.081 [0/1] Installing files. 
00:03:40.081 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:40.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.085 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.086 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.086 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.087 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:40.087 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:40.087 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.346 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.347 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.920 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.920 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.921 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.921 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.921 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.921 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:40.921 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.921 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:40.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:40.925 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:40.925 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:40.925 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:40.925 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:40.925 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:40.925 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:40.925 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:40.925 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:40.925 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:40.925 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:40.925 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:40.925 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:40.925 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:40.925 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:40.925 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:40.925 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:40.925 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:40.925 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:40.925 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:40.925 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:40.925 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:40.925 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:40.925 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:40.925 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:40.925 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:40.925 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:40.925 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:40.925 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:40.925 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:40.925 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:40.925 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:40.925 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:40.925 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:40.925 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:40.925 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:40.925 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:40.925 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:40.925 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:40.925 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:40.925 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:40.925 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:40.925 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:40.925 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:40.925 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:40.925 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:40.925 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:40.925 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:40.925 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:40.925 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:40.925 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:40.925 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:40.925 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:40.925 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:40.925 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:40.925 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:40.925 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:40.926 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:40.926 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:40.926 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:40.926 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:40.926 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:40.926 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:40.926 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:40.926 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:40.926 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:40.926 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:40.926 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:40.926 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:40.926 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:40.926 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:40.926 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:40.926 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:40.926 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:40.926 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:40.926 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:40.926 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:40.926 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:40.926 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:40.926 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:40.926 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:40.926 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:40.926 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:40.926 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:40.926 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:40.926 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:40.926 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:40.926 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:40.926 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:40.926 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:40.926 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:40.926 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:40.926 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:40.926 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:40.926 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:40.926 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:40.926 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:40.926 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:40.926 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:40.926 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:40.926 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:40.926 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:40.926 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:40.926 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:40.926 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:40.926 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:40.926 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:40.926 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:40.926 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:40.926 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:40.926 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:40.926 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:40.926 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:40.926 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:40.926 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:40.926 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:40.926 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:40.926 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:40.926 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:40.926 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:40.926 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:40.926 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:40.926 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:40.926 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:40.926 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:40.926 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:40.926 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:40.926 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:40.926 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:40.926 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:40.926 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:40.926 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:40.926 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:40.926 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:40.926 00:30:57 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:40.926 00:30:57 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.926 00:03:40.926 real 1m29.351s 00:03:40.926 user 18m5.598s 00:03:40.926 sys 2m12.029s 00:03:40.926 00:30:57 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:40.926 00:30:57 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:40.926 ************************************ 00:03:40.926 END TEST build_native_dpdk 00:03:40.926 ************************************ 00:03:40.926 00:30:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:40.926 00:30:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:40.926 00:30:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:41.187 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:41.187 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:41.187 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:41.187 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:41.446 Using 'verbs' RDMA provider 00:03:52.396 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:02.393 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:02.393 Creating mk/config.mk...done. 00:04:02.393 Creating mk/cc.flags.mk...done. 00:04:02.393 Type 'make' to build. 00:04:02.393 00:31:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:02.393 00:31:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:02.393 00:31:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:02.393 00:31:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:02.393 ************************************ 00:04:02.393 START TEST make 00:04:02.393 ************************************ 00:04:02.393 00:31:18 make -- common/autotest_common.sh@1129 -- $ make -j48 00:04:02.393 make[1]: Nothing to be done for 'all'. 
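The "Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs" line above shows how the SPDK configure step locates the DPDK that was just installed: the install phase copied libdpdk.pc and libdpdk-libs.pc into dpdk/build/lib/pkgconfig, and configure reads them through pkg-config. A minimal sketch of that lookup, assuming the same workspace paths as in this log (illustrative only, not the exact commands the configure script runs):

  # Point pkg-config at the staged DPDK install from the steps above
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  # Print the DPDK version recorded in libdpdk.pc (a 23.11.x release for this checkout)
  pkg-config --modversion libdpdk
  # Print the compile and link flags a consumer such as SPDK would pick up
  pkg-config --cflags --libs libdpdk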
00:04:04.315 The Meson build system 00:04:04.315 Version: 1.5.0 00:04:04.315 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:04.315 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:04.315 Build type: native build 00:04:04.315 Project name: libvfio-user 00:04:04.315 Project version: 0.0.1 00:04:04.315 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:04.315 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:04.315 Host machine cpu family: x86_64 00:04:04.315 Host machine cpu: x86_64 00:04:04.315 Run-time dependency threads found: YES 00:04:04.315 Library dl found: YES 00:04:04.315 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:04.315 Run-time dependency json-c found: YES 0.17 00:04:04.315 Run-time dependency cmocka found: YES 1.1.7 00:04:04.315 Program pytest-3 found: NO 00:04:04.315 Program flake8 found: NO 00:04:04.315 Program misspell-fixer found: NO 00:04:04.315 Program restructuredtext-lint found: NO 00:04:04.315 Program valgrind found: YES (/usr/bin/valgrind) 00:04:04.315 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:04.315 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:04.315 Compiler for C supports arguments -Wwrite-strings: YES 00:04:04.315 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:04.315 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:04.315 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:04.315 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:04.315 Build targets in project: 8 00:04:04.315 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:04.315 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:04.315 00:04:04.315 libvfio-user 0.0.1 00:04:04.315 00:04:04.315 User defined options 00:04:04.315 buildtype : debug 00:04:04.315 default_library: shared 00:04:04.315 libdir : /usr/local/lib 00:04:04.315 00:04:04.315 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:05.284 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:05.284 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:05.284 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:05.284 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:05.284 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:05.284 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:05.284 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:05.284 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:05.284 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:05.284 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:05.284 [10/37] Compiling C object samples/null.p/null.c.o 00:04:05.284 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:05.284 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:05.284 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:05.284 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:05.284 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:05.548 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:05.548 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:05.548 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:05.548 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:05.548 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:05.548 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:05.548 [22/37] Compiling C object samples/server.p/server.c.o 00:04:05.548 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:05.548 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:05.548 [25/37] Compiling C object samples/client.p/client.c.o 00:04:05.548 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:05.548 [27/37] Linking target samples/client 00:04:05.548 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:05.548 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:05.548 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:04:05.548 [31/37] Linking target test/unit_tests 00:04:05.812 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:05.812 [33/37] Linking target samples/null 00:04:05.812 [34/37] Linking target samples/gpio-pci-idio-16 00:04:05.812 [35/37] Linking target samples/shadow_ioeventfd_server 00:04:05.812 [36/37] Linking target samples/lspci 00:04:05.812 [37/37] Linking target samples/server 00:04:05.812 INFO: autodetecting backend as ninja 00:04:05.813 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
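A minimal sketch of the libvfio-user build flow driven above (configure with Meson, build with ninja, stage the install under DESTDIR), assuming a checkout of the libvfio-user sources; the build directory and DESTDIR are placeholders:

    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    DESTDIR=/tmp/libvfio-user-stage meson install --quiet -C build-debug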
00:04:06.076 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:06.653 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:06.653 ninja: no work to do. 00:04:45.365 CC lib/log/log.o 00:04:45.365 CC lib/log/log_flags.o 00:04:45.365 CC lib/log/log_deprecated.o 00:04:45.365 CC lib/ut_mock/mock.o 00:04:45.365 CC lib/ut/ut.o 00:04:45.365 LIB libspdk_log.a 00:04:45.365 LIB libspdk_ut.a 00:04:45.365 LIB libspdk_ut_mock.a 00:04:45.365 SO libspdk_ut_mock.so.6.0 00:04:45.365 SO libspdk_ut.so.2.0 00:04:45.365 SO libspdk_log.so.7.1 00:04:45.365 SYMLINK libspdk_ut_mock.so 00:04:45.365 SYMLINK libspdk_ut.so 00:04:45.365 SYMLINK libspdk_log.so 00:04:45.365 CC lib/ioat/ioat.o 00:04:45.365 CC lib/dma/dma.o 00:04:45.365 CXX lib/trace_parser/trace.o 00:04:45.365 CC lib/util/base64.o 00:04:45.365 CC lib/util/bit_array.o 00:04:45.365 CC lib/util/cpuset.o 00:04:45.365 CC lib/util/crc16.o 00:04:45.365 CC lib/util/crc32.o 00:04:45.365 CC lib/util/crc32c.o 00:04:45.365 CC lib/util/crc32_ieee.o 00:04:45.365 CC lib/util/crc64.o 00:04:45.365 CC lib/util/dif.o 00:04:45.365 CC lib/util/fd.o 00:04:45.365 CC lib/util/fd_group.o 00:04:45.365 CC lib/util/file.o 00:04:45.365 CC lib/util/hexlify.o 00:04:45.365 CC lib/util/iov.o 00:04:45.365 CC lib/util/math.o 00:04:45.365 CC lib/util/net.o 00:04:45.365 CC lib/util/pipe.o 00:04:45.365 CC lib/util/strerror_tls.o 00:04:45.365 CC lib/util/uuid.o 00:04:45.365 CC lib/util/string.o 00:04:45.365 CC lib/util/zipf.o 00:04:45.365 CC lib/util/xor.o 00:04:45.365 CC lib/util/md5.o 00:04:45.365 CC lib/vfio_user/host/vfio_user_pci.o 00:04:45.365 CC lib/vfio_user/host/vfio_user.o 00:04:45.365 LIB libspdk_dma.a 00:04:45.365 SO libspdk_dma.so.5.0 00:04:45.365 LIB libspdk_ioat.a 00:04:45.365 SYMLINK libspdk_dma.so 00:04:45.365 SO libspdk_ioat.so.7.0 00:04:45.365 SYMLINK libspdk_ioat.so 00:04:45.365 LIB libspdk_vfio_user.a 00:04:45.365 SO libspdk_vfio_user.so.5.0 00:04:45.365 SYMLINK libspdk_vfio_user.so 00:04:45.365 LIB libspdk_util.a 00:04:45.365 SO libspdk_util.so.10.1 00:04:45.365 SYMLINK libspdk_util.so 00:04:45.365 CC lib/conf/conf.o 00:04:45.365 CC lib/rdma_utils/rdma_utils.o 00:04:45.365 CC lib/vmd/vmd.o 00:04:45.365 CC lib/env_dpdk/env.o 00:04:45.365 CC lib/env_dpdk/memory.o 00:04:45.365 CC lib/vmd/led.o 00:04:45.365 CC lib/env_dpdk/pci.o 00:04:45.365 CC lib/idxd/idxd.o 00:04:45.365 CC lib/json/json_parse.o 00:04:45.365 CC lib/env_dpdk/init.o 00:04:45.365 CC lib/json/json_util.o 00:04:45.365 CC lib/idxd/idxd_user.o 00:04:45.365 CC lib/env_dpdk/threads.o 00:04:45.365 CC lib/json/json_write.o 00:04:45.365 CC lib/env_dpdk/pci_ioat.o 00:04:45.365 CC lib/idxd/idxd_kernel.o 00:04:45.365 CC lib/env_dpdk/pci_virtio.o 00:04:45.365 CC lib/env_dpdk/pci_vmd.o 00:04:45.365 CC lib/env_dpdk/pci_idxd.o 00:04:45.365 CC lib/env_dpdk/sigbus_handler.o 00:04:45.365 CC lib/env_dpdk/pci_event.o 00:04:45.365 CC lib/env_dpdk/pci_dpdk.o 00:04:45.365 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:45.365 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:45.365 LIB libspdk_conf.a 00:04:45.365 SO libspdk_conf.so.6.0 00:04:45.365 LIB libspdk_rdma_utils.a 00:04:45.365 LIB libspdk_json.a 00:04:45.365 SYMLINK libspdk_conf.so 00:04:45.365 SO libspdk_rdma_utils.so.1.0 00:04:45.365 SO libspdk_json.so.6.0 00:04:45.365 SYMLINK libspdk_rdma_utils.so 00:04:45.365 SYMLINK libspdk_json.so 00:04:45.365 CC lib/rdma_provider/common.o 00:04:45.365 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:04:45.365 CC lib/jsonrpc/jsonrpc_server.o 00:04:45.365 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:45.365 CC lib/jsonrpc/jsonrpc_client.o 00:04:45.365 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:45.365 LIB libspdk_idxd.a 00:04:45.365 SO libspdk_idxd.so.12.1 00:04:45.365 LIB libspdk_vmd.a 00:04:45.365 SYMLINK libspdk_idxd.so 00:04:45.365 SO libspdk_vmd.so.6.0 00:04:45.365 SYMLINK libspdk_vmd.so 00:04:45.365 LIB libspdk_rdma_provider.a 00:04:45.365 SO libspdk_rdma_provider.so.7.0 00:04:45.365 LIB libspdk_jsonrpc.a 00:04:45.365 SYMLINK libspdk_rdma_provider.so 00:04:45.365 SO libspdk_jsonrpc.so.6.0 00:04:45.365 LIB libspdk_trace_parser.a 00:04:45.624 SO libspdk_trace_parser.so.6.0 00:04:45.624 SYMLINK libspdk_jsonrpc.so 00:04:45.624 SYMLINK libspdk_trace_parser.so 00:04:45.624 CC lib/rpc/rpc.o 00:04:45.883 LIB libspdk_rpc.a 00:04:45.883 SO libspdk_rpc.so.6.0 00:04:45.883 SYMLINK libspdk_rpc.so 00:04:46.142 CC lib/trace/trace.o 00:04:46.142 CC lib/keyring/keyring.o 00:04:46.142 CC lib/trace/trace_flags.o 00:04:46.142 CC lib/notify/notify.o 00:04:46.142 CC lib/keyring/keyring_rpc.o 00:04:46.142 CC lib/trace/trace_rpc.o 00:04:46.142 CC lib/notify/notify_rpc.o 00:04:46.401 LIB libspdk_notify.a 00:04:46.401 SO libspdk_notify.so.6.0 00:04:46.401 SYMLINK libspdk_notify.so 00:04:46.401 LIB libspdk_keyring.a 00:04:46.401 LIB libspdk_trace.a 00:04:46.401 SO libspdk_keyring.so.2.0 00:04:46.401 SO libspdk_trace.so.11.0 00:04:46.401 SYMLINK libspdk_keyring.so 00:04:46.401 SYMLINK libspdk_trace.so 00:04:46.660 LIB libspdk_env_dpdk.a 00:04:46.660 CC lib/sock/sock.o 00:04:46.660 CC lib/sock/sock_rpc.o 00:04:46.660 CC lib/thread/thread.o 00:04:46.660 CC lib/thread/iobuf.o 00:04:46.660 SO libspdk_env_dpdk.so.15.1 00:04:46.921 SYMLINK libspdk_env_dpdk.so 00:04:47.180 LIB libspdk_sock.a 00:04:47.180 SO libspdk_sock.so.10.0 00:04:47.180 SYMLINK libspdk_sock.so 00:04:47.440 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:47.440 CC lib/nvme/nvme_ctrlr.o 00:04:47.440 CC lib/nvme/nvme_fabric.o 00:04:47.440 CC lib/nvme/nvme_ns_cmd.o 00:04:47.440 CC lib/nvme/nvme_ns.o 00:04:47.440 CC lib/nvme/nvme_pcie_common.o 00:04:47.440 CC lib/nvme/nvme_pcie.o 00:04:47.440 CC lib/nvme/nvme_qpair.o 00:04:47.440 CC lib/nvme/nvme.o 00:04:47.440 CC lib/nvme/nvme_quirks.o 00:04:47.440 CC lib/nvme/nvme_transport.o 00:04:47.440 CC lib/nvme/nvme_discovery.o 00:04:47.440 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:47.440 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:47.440 CC lib/nvme/nvme_tcp.o 00:04:47.440 CC lib/nvme/nvme_opal.o 00:04:47.440 CC lib/nvme/nvme_io_msg.o 00:04:47.440 CC lib/nvme/nvme_poll_group.o 00:04:47.440 CC lib/nvme/nvme_zns.o 00:04:47.440 CC lib/nvme/nvme_stubs.o 00:04:47.440 CC lib/nvme/nvme_auth.o 00:04:47.440 CC lib/nvme/nvme_cuse.o 00:04:47.440 CC lib/nvme/nvme_rdma.o 00:04:47.440 CC lib/nvme/nvme_vfio_user.o 00:04:48.378 LIB libspdk_thread.a 00:04:48.378 SO libspdk_thread.so.11.0 00:04:48.378 SYMLINK libspdk_thread.so 00:04:48.637 CC lib/accel/accel.o 00:04:48.637 CC lib/accel/accel_rpc.o 00:04:48.637 CC lib/accel/accel_sw.o 00:04:48.637 CC lib/fsdev/fsdev.o 00:04:48.637 CC lib/fsdev/fsdev_io.o 00:04:48.637 CC lib/vfu_tgt/tgt_endpoint.o 00:04:48.637 CC lib/fsdev/fsdev_rpc.o 00:04:48.637 CC lib/blob/blobstore.o 00:04:48.637 CC lib/vfu_tgt/tgt_rpc.o 00:04:48.637 CC lib/virtio/virtio.o 00:04:48.637 CC lib/init/json_config.o 00:04:48.637 CC lib/blob/request.o 00:04:48.637 CC lib/virtio/virtio_vhost_user.o 00:04:48.637 CC lib/init/subsystem.o 00:04:48.637 CC lib/virtio/virtio_vfio_user.o 00:04:48.637 CC 
lib/blob/zeroes.o 00:04:48.637 CC lib/init/subsystem_rpc.o 00:04:48.637 CC lib/virtio/virtio_pci.o 00:04:48.637 CC lib/init/rpc.o 00:04:48.637 CC lib/blob/blob_bs_dev.o 00:04:48.896 LIB libspdk_init.a 00:04:48.896 SO libspdk_init.so.6.0 00:04:48.896 SYMLINK libspdk_init.so 00:04:48.896 LIB libspdk_virtio.a 00:04:48.896 LIB libspdk_vfu_tgt.a 00:04:48.896 SO libspdk_vfu_tgt.so.3.0 00:04:48.896 SO libspdk_virtio.so.7.0 00:04:48.896 SYMLINK libspdk_vfu_tgt.so 00:04:48.896 SYMLINK libspdk_virtio.so 00:04:49.154 CC lib/event/app.o 00:04:49.154 CC lib/event/reactor.o 00:04:49.154 CC lib/event/log_rpc.o 00:04:49.154 CC lib/event/app_rpc.o 00:04:49.154 CC lib/event/scheduler_static.o 00:04:49.154 LIB libspdk_fsdev.a 00:04:49.154 SO libspdk_fsdev.so.2.0 00:04:49.411 SYMLINK libspdk_fsdev.so 00:04:49.411 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:49.411 LIB libspdk_event.a 00:04:49.669 SO libspdk_event.so.14.0 00:04:49.669 SYMLINK libspdk_event.so 00:04:49.669 LIB libspdk_accel.a 00:04:49.669 SO libspdk_accel.so.16.0 00:04:49.669 SYMLINK libspdk_accel.so 00:04:49.928 LIB libspdk_nvme.a 00:04:49.928 CC lib/bdev/bdev.o 00:04:49.928 CC lib/bdev/bdev_rpc.o 00:04:49.928 CC lib/bdev/bdev_zone.o 00:04:49.928 SO libspdk_nvme.so.15.0 00:04:49.928 CC lib/bdev/part.o 00:04:49.928 CC lib/bdev/scsi_nvme.o 00:04:50.187 LIB libspdk_fuse_dispatcher.a 00:04:50.187 SYMLINK libspdk_nvme.so 00:04:50.187 SO libspdk_fuse_dispatcher.so.1.0 00:04:50.187 SYMLINK libspdk_fuse_dispatcher.so 00:04:51.567 LIB libspdk_blob.a 00:04:51.567 SO libspdk_blob.so.12.0 00:04:51.826 SYMLINK libspdk_blob.so 00:04:51.826 CC lib/blobfs/blobfs.o 00:04:51.826 CC lib/blobfs/tree.o 00:04:51.826 CC lib/lvol/lvol.o 00:04:52.760 LIB libspdk_bdev.a 00:04:52.760 SO libspdk_bdev.so.17.0 00:04:52.760 LIB libspdk_blobfs.a 00:04:52.760 SO libspdk_blobfs.so.11.0 00:04:52.760 SYMLINK libspdk_bdev.so 00:04:52.760 SYMLINK libspdk_blobfs.so 00:04:52.760 LIB libspdk_lvol.a 00:04:52.760 SO libspdk_lvol.so.11.0 00:04:53.028 CC lib/ublk/ublk.o 00:04:53.028 CC lib/ublk/ublk_rpc.o 00:04:53.028 CC lib/nbd/nbd.o 00:04:53.028 CC lib/nbd/nbd_rpc.o 00:04:53.028 CC lib/nvmf/ctrlr.o 00:04:53.028 CC lib/scsi/dev.o 00:04:53.028 CC lib/ftl/ftl_core.o 00:04:53.028 CC lib/nvmf/ctrlr_discovery.o 00:04:53.028 CC lib/scsi/lun.o 00:04:53.028 CC lib/ftl/ftl_init.o 00:04:53.028 CC lib/scsi/port.o 00:04:53.028 CC lib/nvmf/ctrlr_bdev.o 00:04:53.028 CC lib/ftl/ftl_layout.o 00:04:53.028 CC lib/scsi/scsi.o 00:04:53.028 CC lib/nvmf/subsystem.o 00:04:53.028 CC lib/ftl/ftl_debug.o 00:04:53.028 CC lib/ftl/ftl_io.o 00:04:53.028 CC lib/scsi/scsi_pr.o 00:04:53.028 CC lib/scsi/scsi_bdev.o 00:04:53.028 CC lib/ftl/ftl_sb.o 00:04:53.028 CC lib/nvmf/nvmf.o 00:04:53.028 CC lib/scsi/scsi_rpc.o 00:04:53.028 CC lib/nvmf/nvmf_rpc.o 00:04:53.028 CC lib/nvmf/transport.o 00:04:53.028 CC lib/ftl/ftl_l2p.o 00:04:53.028 CC lib/scsi/task.o 00:04:53.028 CC lib/nvmf/tcp.o 00:04:53.028 CC lib/ftl/ftl_l2p_flat.o 00:04:53.028 CC lib/ftl/ftl_nv_cache.o 00:04:53.028 CC lib/ftl/ftl_band.o 00:04:53.028 CC lib/nvmf/mdns_server.o 00:04:53.028 CC lib/nvmf/stubs.o 00:04:53.028 CC lib/ftl/ftl_band_ops.o 00:04:53.028 CC lib/nvmf/vfio_user.o 00:04:53.028 CC lib/ftl/ftl_writer.o 00:04:53.028 CC lib/nvmf/rdma.o 00:04:53.028 CC lib/ftl/ftl_rq.o 00:04:53.028 CC lib/nvmf/auth.o 00:04:53.028 CC lib/ftl/ftl_reloc.o 00:04:53.029 CC lib/ftl/ftl_l2p_cache.o 00:04:53.029 CC lib/ftl/ftl_p2l.o 00:04:53.029 CC lib/ftl/ftl_p2l_log.o 00:04:53.029 CC lib/ftl/mngt/ftl_mngt.o 00:04:53.029 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:53.029 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:53.029 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:53.029 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:53.029 SYMLINK libspdk_lvol.so 00:04:53.029 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:53.290 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:53.290 CC lib/ftl/utils/ftl_conf.o 00:04:53.290 CC lib/ftl/utils/ftl_md.o 00:04:53.290 CC lib/ftl/utils/ftl_mempool.o 00:04:53.290 CC lib/ftl/utils/ftl_bitmap.o 00:04:53.290 CC lib/ftl/utils/ftl_property.o 00:04:53.290 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:53.550 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:53.550 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:53.550 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:53.550 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:53.550 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:53.550 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:53.550 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:53.550 CC lib/ftl/base/ftl_base_dev.o 00:04:53.808 CC lib/ftl/base/ftl_base_bdev.o 00:04:53.808 CC lib/ftl/ftl_trace.o 00:04:53.808 LIB libspdk_nbd.a 00:04:53.808 SO libspdk_nbd.so.7.0 00:04:53.808 SYMLINK libspdk_nbd.so 00:04:53.808 LIB libspdk_scsi.a 00:04:54.066 SO libspdk_scsi.so.9.0 00:04:54.066 SYMLINK libspdk_scsi.so 00:04:54.066 LIB libspdk_ublk.a 00:04:54.066 SO libspdk_ublk.so.3.0 00:04:54.066 SYMLINK libspdk_ublk.so 00:04:54.066 CC lib/iscsi/conn.o 00:04:54.066 CC lib/vhost/vhost.o 00:04:54.066 CC lib/iscsi/init_grp.o 00:04:54.066 CC lib/vhost/vhost_rpc.o 00:04:54.066 CC lib/iscsi/iscsi.o 00:04:54.325 CC lib/vhost/vhost_scsi.o 00:04:54.325 CC lib/iscsi/param.o 00:04:54.325 CC lib/vhost/vhost_blk.o 00:04:54.325 CC lib/iscsi/portal_grp.o 00:04:54.325 CC lib/vhost/rte_vhost_user.o 00:04:54.325 CC lib/iscsi/tgt_node.o 00:04:54.325 CC lib/iscsi/iscsi_subsystem.o 00:04:54.325 CC lib/iscsi/iscsi_rpc.o 00:04:54.325 CC lib/iscsi/task.o 00:04:54.583 LIB libspdk_ftl.a 00:04:54.583 SO libspdk_ftl.so.9.0 00:04:54.840 SYMLINK libspdk_ftl.so 00:04:55.407 LIB libspdk_vhost.a 00:04:55.407 LIB libspdk_nvmf.a 00:04:55.407 SO libspdk_vhost.so.8.0 00:04:55.666 SO libspdk_nvmf.so.20.0 00:04:55.666 SYMLINK libspdk_vhost.so 00:04:55.666 LIB libspdk_iscsi.a 00:04:55.666 SO libspdk_iscsi.so.8.0 00:04:55.666 SYMLINK libspdk_nvmf.so 00:04:55.925 SYMLINK libspdk_iscsi.so 00:04:56.185 CC module/vfu_device/vfu_virtio.o 00:04:56.185 CC module/vfu_device/vfu_virtio_blk.o 00:04:56.185 CC module/env_dpdk/env_dpdk_rpc.o 00:04:56.185 CC module/vfu_device/vfu_virtio_scsi.o 00:04:56.185 CC module/vfu_device/vfu_virtio_rpc.o 00:04:56.185 CC module/vfu_device/vfu_virtio_fs.o 00:04:56.185 CC module/scheduler/gscheduler/gscheduler.o 00:04:56.185 CC module/accel/ioat/accel_ioat.o 00:04:56.185 CC module/accel/iaa/accel_iaa.o 00:04:56.185 CC module/blob/bdev/blob_bdev.o 00:04:56.185 CC module/keyring/file/keyring.o 00:04:56.185 CC module/accel/error/accel_error.o 00:04:56.185 CC module/accel/iaa/accel_iaa_rpc.o 00:04:56.185 CC module/accel/error/accel_error_rpc.o 00:04:56.185 CC module/accel/ioat/accel_ioat_rpc.o 00:04:56.185 CC module/keyring/file/keyring_rpc.o 
00:04:56.185 CC module/sock/posix/posix.o 00:04:56.185 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:56.185 CC module/fsdev/aio/fsdev_aio.o 00:04:56.185 CC module/accel/dsa/accel_dsa.o 00:04:56.185 CC module/keyring/linux/keyring.o 00:04:56.185 CC module/accel/dsa/accel_dsa_rpc.o 00:04:56.185 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:56.185 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:56.185 CC module/keyring/linux/keyring_rpc.o 00:04:56.185 CC module/fsdev/aio/linux_aio_mgr.o 00:04:56.185 LIB libspdk_env_dpdk_rpc.a 00:04:56.185 SO libspdk_env_dpdk_rpc.so.6.0 00:04:56.444 SYMLINK libspdk_env_dpdk_rpc.so 00:04:56.444 LIB libspdk_keyring_linux.a 00:04:56.444 LIB libspdk_scheduler_gscheduler.a 00:04:56.444 LIB libspdk_scheduler_dpdk_governor.a 00:04:56.444 LIB libspdk_keyring_file.a 00:04:56.444 SO libspdk_scheduler_gscheduler.so.4.0 00:04:56.444 SO libspdk_keyring_linux.so.1.0 00:04:56.444 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:56.444 SO libspdk_keyring_file.so.2.0 00:04:56.444 LIB libspdk_accel_iaa.a 00:04:56.444 LIB libspdk_scheduler_dynamic.a 00:04:56.444 LIB libspdk_accel_ioat.a 00:04:56.444 SYMLINK libspdk_scheduler_gscheduler.so 00:04:56.444 SYMLINK libspdk_keyring_linux.so 00:04:56.444 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:56.444 SO libspdk_scheduler_dynamic.so.4.0 00:04:56.444 SO libspdk_accel_iaa.so.3.0 00:04:56.444 SO libspdk_accel_ioat.so.6.0 00:04:56.444 SYMLINK libspdk_keyring_file.so 00:04:56.444 SYMLINK libspdk_scheduler_dynamic.so 00:04:56.444 LIB libspdk_blob_bdev.a 00:04:56.444 SYMLINK libspdk_accel_iaa.so 00:04:56.444 SYMLINK libspdk_accel_ioat.so 00:04:56.444 LIB libspdk_accel_error.a 00:04:56.444 SO libspdk_blob_bdev.so.12.0 00:04:56.444 SO libspdk_accel_error.so.2.0 00:04:56.703 SYMLINK libspdk_blob_bdev.so 00:04:56.703 SYMLINK libspdk_accel_error.so 00:04:56.703 LIB libspdk_accel_dsa.a 00:04:56.703 SO libspdk_accel_dsa.so.5.0 00:04:56.703 SYMLINK libspdk_accel_dsa.so 00:04:56.704 LIB libspdk_vfu_device.a 00:04:56.967 SO libspdk_vfu_device.so.3.0 00:04:56.967 CC module/bdev/malloc/bdev_malloc.o 00:04:56.967 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.967 CC module/blobfs/bdev/blobfs_bdev.o 00:04:56.967 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:56.967 CC module/bdev/null/bdev_null.o 00:04:56.967 CC module/bdev/null/bdev_null_rpc.o 00:04:56.967 CC module/bdev/ftl/bdev_ftl.o 00:04:56.967 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:56.967 CC module/bdev/delay/vbdev_delay.o 00:04:56.967 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:56.967 CC module/bdev/iscsi/bdev_iscsi.o 00:04:56.967 CC module/bdev/passthru/vbdev_passthru.o 00:04:56.967 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:56.967 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:56.967 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:56.967 CC module/bdev/lvol/vbdev_lvol.o 00:04:56.968 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:56.968 CC module/bdev/gpt/gpt.o 00:04:56.968 CC module/bdev/error/vbdev_error.o 00:04:56.968 CC module/bdev/gpt/vbdev_gpt.o 00:04:56.968 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:56.968 CC module/bdev/error/vbdev_error_rpc.o 00:04:56.968 CC module/bdev/split/vbdev_split.o 00:04:56.968 CC module/bdev/nvme/bdev_nvme.o 00:04:56.968 CC module/bdev/split/vbdev_split_rpc.o 00:04:56.968 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:56.968 CC module/bdev/raid/bdev_raid.o 00:04:56.968 CC module/bdev/nvme/nvme_rpc.o 00:04:56.968 CC module/bdev/raid/bdev_raid_rpc.o 00:04:56.968 CC module/bdev/nvme/bdev_mdns_client.o 00:04:56.968 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:04:56.968 CC module/bdev/aio/bdev_aio.o 00:04:56.968 CC module/bdev/nvme/vbdev_opal.o 00:04:56.968 CC module/bdev/raid/bdev_raid_sb.o 00:04:56.968 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:56.968 CC module/bdev/raid/raid0.o 00:04:56.968 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:56.968 CC module/bdev/aio/bdev_aio_rpc.o 00:04:56.968 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:56.968 CC module/bdev/raid/raid1.o 00:04:56.968 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:56.968 CC module/bdev/raid/concat.o 00:04:56.968 SYMLINK libspdk_vfu_device.so 00:04:56.968 LIB libspdk_fsdev_aio.a 00:04:56.968 SO libspdk_fsdev_aio.so.1.0 00:04:57.227 LIB libspdk_sock_posix.a 00:04:57.227 SO libspdk_sock_posix.so.6.0 00:04:57.227 SYMLINK libspdk_fsdev_aio.so 00:04:57.227 LIB libspdk_blobfs_bdev.a 00:04:57.227 SO libspdk_blobfs_bdev.so.6.0 00:04:57.227 LIB libspdk_bdev_split.a 00:04:57.227 SYMLINK libspdk_sock_posix.so 00:04:57.227 SO libspdk_bdev_split.so.6.0 00:04:57.227 SYMLINK libspdk_blobfs_bdev.so 00:04:57.486 LIB libspdk_bdev_passthru.a 00:04:57.486 LIB libspdk_bdev_zone_block.a 00:04:57.486 SYMLINK libspdk_bdev_split.so 00:04:57.486 LIB libspdk_bdev_gpt.a 00:04:57.486 LIB libspdk_bdev_null.a 00:04:57.486 SO libspdk_bdev_passthru.so.6.0 00:04:57.486 SO libspdk_bdev_zone_block.so.6.0 00:04:57.486 LIB libspdk_bdev_error.a 00:04:57.486 SO libspdk_bdev_gpt.so.6.0 00:04:57.486 SO libspdk_bdev_null.so.6.0 00:04:57.486 LIB libspdk_bdev_ftl.a 00:04:57.486 SO libspdk_bdev_error.so.6.0 00:04:57.486 SO libspdk_bdev_ftl.so.6.0 00:04:57.486 LIB libspdk_bdev_iscsi.a 00:04:57.486 LIB libspdk_bdev_aio.a 00:04:57.486 SYMLINK libspdk_bdev_passthru.so 00:04:57.486 SYMLINK libspdk_bdev_zone_block.so 00:04:57.486 SYMLINK libspdk_bdev_gpt.so 00:04:57.486 SYMLINK libspdk_bdev_null.so 00:04:57.486 SO libspdk_bdev_aio.so.6.0 00:04:57.486 SO libspdk_bdev_iscsi.so.6.0 00:04:57.486 SYMLINK libspdk_bdev_error.so 00:04:57.486 LIB libspdk_bdev_malloc.a 00:04:57.486 SYMLINK libspdk_bdev_ftl.so 00:04:57.486 LIB libspdk_bdev_delay.a 00:04:57.486 SO libspdk_bdev_malloc.so.6.0 00:04:57.486 SYMLINK libspdk_bdev_aio.so 00:04:57.486 SYMLINK libspdk_bdev_iscsi.so 00:04:57.486 SO libspdk_bdev_delay.so.6.0 00:04:57.486 SYMLINK libspdk_bdev_malloc.so 00:04:57.486 LIB libspdk_bdev_virtio.a 00:04:57.486 SYMLINK libspdk_bdev_delay.so 00:04:57.486 LIB libspdk_bdev_lvol.a 00:04:57.746 SO libspdk_bdev_virtio.so.6.0 00:04:57.746 SO libspdk_bdev_lvol.so.6.0 00:04:57.746 SYMLINK libspdk_bdev_lvol.so 00:04:57.746 SYMLINK libspdk_bdev_virtio.so 00:04:58.317 LIB libspdk_bdev_raid.a 00:04:58.317 SO libspdk_bdev_raid.so.6.0 00:04:58.317 SYMLINK libspdk_bdev_raid.so 00:04:59.693 LIB libspdk_bdev_nvme.a 00:04:59.693 SO libspdk_bdev_nvme.so.7.1 00:04:59.693 SYMLINK libspdk_bdev_nvme.so 00:04:59.950 CC module/event/subsystems/sock/sock.o 00:04:59.950 CC module/event/subsystems/iobuf/iobuf.o 00:04:59.950 CC module/event/subsystems/fsdev/fsdev.o 00:04:59.950 CC module/event/subsystems/vmd/vmd.o 00:04:59.950 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:59.950 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:59.950 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:59.950 CC module/event/subsystems/keyring/keyring.o 00:04:59.950 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:59.950 CC module/event/subsystems/scheduler/scheduler.o 00:05:00.207 LIB libspdk_event_keyring.a 00:05:00.207 LIB libspdk_event_vhost_blk.a 00:05:00.207 LIB libspdk_event_fsdev.a 00:05:00.207 LIB libspdk_event_scheduler.a 00:05:00.207 
LIB libspdk_event_vfu_tgt.a 00:05:00.207 LIB libspdk_event_vmd.a 00:05:00.207 LIB libspdk_event_sock.a 00:05:00.207 SO libspdk_event_keyring.so.1.0 00:05:00.207 SO libspdk_event_vhost_blk.so.3.0 00:05:00.207 LIB libspdk_event_iobuf.a 00:05:00.207 SO libspdk_event_fsdev.so.1.0 00:05:00.207 SO libspdk_event_vfu_tgt.so.3.0 00:05:00.207 SO libspdk_event_scheduler.so.4.0 00:05:00.207 SO libspdk_event_sock.so.5.0 00:05:00.207 SO libspdk_event_vmd.so.6.0 00:05:00.207 SO libspdk_event_iobuf.so.3.0 00:05:00.207 SYMLINK libspdk_event_keyring.so 00:05:00.207 SYMLINK libspdk_event_vhost_blk.so 00:05:00.207 SYMLINK libspdk_event_fsdev.so 00:05:00.207 SYMLINK libspdk_event_scheduler.so 00:05:00.207 SYMLINK libspdk_event_vfu_tgt.so 00:05:00.207 SYMLINK libspdk_event_sock.so 00:05:00.207 SYMLINK libspdk_event_vmd.so 00:05:00.207 SYMLINK libspdk_event_iobuf.so 00:05:00.466 CC module/event/subsystems/accel/accel.o 00:05:00.725 LIB libspdk_event_accel.a 00:05:00.725 SO libspdk_event_accel.so.6.0 00:05:00.725 SYMLINK libspdk_event_accel.so 00:05:00.984 CC module/event/subsystems/bdev/bdev.o 00:05:00.984 LIB libspdk_event_bdev.a 00:05:00.984 SO libspdk_event_bdev.so.6.0 00:05:01.243 SYMLINK libspdk_event_bdev.so 00:05:01.243 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:01.243 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:01.243 CC module/event/subsystems/ublk/ublk.o 00:05:01.243 CC module/event/subsystems/nbd/nbd.o 00:05:01.243 CC module/event/subsystems/scsi/scsi.o 00:05:01.502 LIB libspdk_event_nbd.a 00:05:01.502 LIB libspdk_event_ublk.a 00:05:01.502 LIB libspdk_event_scsi.a 00:05:01.502 SO libspdk_event_nbd.so.6.0 00:05:01.502 SO libspdk_event_ublk.so.3.0 00:05:01.502 SO libspdk_event_scsi.so.6.0 00:05:01.502 SYMLINK libspdk_event_ublk.so 00:05:01.502 SYMLINK libspdk_event_nbd.so 00:05:01.502 SYMLINK libspdk_event_scsi.so 00:05:01.502 LIB libspdk_event_nvmf.a 00:05:01.502 SO libspdk_event_nvmf.so.6.0 00:05:01.761 SYMLINK libspdk_event_nvmf.so 00:05:01.761 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:01.761 CC module/event/subsystems/iscsi/iscsi.o 00:05:01.761 LIB libspdk_event_vhost_scsi.a 00:05:01.761 SO libspdk_event_vhost_scsi.so.3.0 00:05:02.019 LIB libspdk_event_iscsi.a 00:05:02.019 SO libspdk_event_iscsi.so.6.0 00:05:02.019 SYMLINK libspdk_event_vhost_scsi.so 00:05:02.019 SYMLINK libspdk_event_iscsi.so 00:05:02.019 SO libspdk.so.6.0 00:05:02.019 SYMLINK libspdk.so 00:05:02.284 CXX app/trace/trace.o 00:05:02.284 CC app/spdk_nvme_identify/identify.o 00:05:02.284 CC app/trace_record/trace_record.o 00:05:02.284 CC app/spdk_lspci/spdk_lspci.o 00:05:02.284 CC test/rpc_client/rpc_client_test.o 00:05:02.284 CC app/spdk_top/spdk_top.o 00:05:02.284 CC app/spdk_nvme_perf/perf.o 00:05:02.284 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.284 TEST_HEADER include/spdk/accel.h 00:05:02.284 TEST_HEADER include/spdk/accel_module.h 00:05:02.284 TEST_HEADER include/spdk/assert.h 00:05:02.284 TEST_HEADER include/spdk/barrier.h 00:05:02.284 TEST_HEADER include/spdk/base64.h 00:05:02.284 TEST_HEADER include/spdk/bdev.h 00:05:02.284 TEST_HEADER include/spdk/bdev_module.h 00:05:02.284 TEST_HEADER include/spdk/bdev_zone.h 00:05:02.284 TEST_HEADER include/spdk/bit_array.h 00:05:02.284 TEST_HEADER include/spdk/bit_pool.h 00:05:02.284 TEST_HEADER include/spdk/blob_bdev.h 00:05:02.284 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:02.284 TEST_HEADER include/spdk/blobfs.h 00:05:02.284 TEST_HEADER include/spdk/conf.h 00:05:02.284 TEST_HEADER include/spdk/blob.h 00:05:02.284 TEST_HEADER include/spdk/config.h 
00:05:02.284 TEST_HEADER include/spdk/cpuset.h 00:05:02.284 TEST_HEADER include/spdk/crc16.h 00:05:02.284 TEST_HEADER include/spdk/crc32.h 00:05:02.284 TEST_HEADER include/spdk/crc64.h 00:05:02.284 TEST_HEADER include/spdk/dif.h 00:05:02.284 TEST_HEADER include/spdk/dma.h 00:05:02.284 TEST_HEADER include/spdk/endian.h 00:05:02.284 TEST_HEADER include/spdk/env_dpdk.h 00:05:02.284 TEST_HEADER include/spdk/env.h 00:05:02.284 TEST_HEADER include/spdk/event.h 00:05:02.284 TEST_HEADER include/spdk/fd.h 00:05:02.284 TEST_HEADER include/spdk/fd_group.h 00:05:02.284 TEST_HEADER include/spdk/file.h 00:05:02.284 TEST_HEADER include/spdk/fsdev.h 00:05:02.284 TEST_HEADER include/spdk/fsdev_module.h 00:05:02.284 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:02.284 TEST_HEADER include/spdk/ftl.h 00:05:02.284 TEST_HEADER include/spdk/gpt_spec.h 00:05:02.284 TEST_HEADER include/spdk/hexlify.h 00:05:02.284 TEST_HEADER include/spdk/histogram_data.h 00:05:02.284 TEST_HEADER include/spdk/idxd.h 00:05:02.284 TEST_HEADER include/spdk/idxd_spec.h 00:05:02.284 TEST_HEADER include/spdk/init.h 00:05:02.284 TEST_HEADER include/spdk/ioat.h 00:05:02.284 TEST_HEADER include/spdk/iscsi_spec.h 00:05:02.284 TEST_HEADER include/spdk/ioat_spec.h 00:05:02.284 TEST_HEADER include/spdk/json.h 00:05:02.284 TEST_HEADER include/spdk/jsonrpc.h 00:05:02.284 TEST_HEADER include/spdk/keyring_module.h 00:05:02.284 TEST_HEADER include/spdk/keyring.h 00:05:02.284 TEST_HEADER include/spdk/likely.h 00:05:02.284 TEST_HEADER include/spdk/log.h 00:05:02.284 TEST_HEADER include/spdk/lvol.h 00:05:02.284 TEST_HEADER include/spdk/md5.h 00:05:02.284 TEST_HEADER include/spdk/memory.h 00:05:02.284 TEST_HEADER include/spdk/mmio.h 00:05:02.284 TEST_HEADER include/spdk/nbd.h 00:05:02.284 TEST_HEADER include/spdk/net.h 00:05:02.284 TEST_HEADER include/spdk/notify.h 00:05:02.284 TEST_HEADER include/spdk/nvme.h 00:05:02.284 TEST_HEADER include/spdk/nvme_intel.h 00:05:02.284 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:02.284 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:02.284 TEST_HEADER include/spdk/nvme_spec.h 00:05:02.284 TEST_HEADER include/spdk/nvme_zns.h 00:05:02.284 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:02.284 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:02.284 TEST_HEADER include/spdk/nvmf.h 00:05:02.284 TEST_HEADER include/spdk/nvmf_spec.h 00:05:02.284 TEST_HEADER include/spdk/nvmf_transport.h 00:05:02.284 TEST_HEADER include/spdk/opal.h 00:05:02.284 TEST_HEADER include/spdk/pci_ids.h 00:05:02.284 TEST_HEADER include/spdk/opal_spec.h 00:05:02.284 TEST_HEADER include/spdk/pipe.h 00:05:02.284 TEST_HEADER include/spdk/queue.h 00:05:02.284 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:02.284 TEST_HEADER include/spdk/reduce.h 00:05:02.284 TEST_HEADER include/spdk/rpc.h 00:05:02.285 TEST_HEADER include/spdk/scheduler.h 00:05:02.285 TEST_HEADER include/spdk/scsi.h 00:05:02.285 TEST_HEADER include/spdk/sock.h 00:05:02.285 TEST_HEADER include/spdk/scsi_spec.h 00:05:02.285 TEST_HEADER include/spdk/stdinc.h 00:05:02.285 TEST_HEADER include/spdk/string.h 00:05:02.285 TEST_HEADER include/spdk/thread.h 00:05:02.285 TEST_HEADER include/spdk/trace.h 00:05:02.285 TEST_HEADER include/spdk/trace_parser.h 00:05:02.285 CC app/spdk_dd/spdk_dd.o 00:05:02.285 TEST_HEADER include/spdk/tree.h 00:05:02.285 TEST_HEADER include/spdk/ublk.h 00:05:02.285 TEST_HEADER include/spdk/util.h 00:05:02.285 TEST_HEADER include/spdk/uuid.h 00:05:02.285 TEST_HEADER include/spdk/version.h 00:05:02.285 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:02.285 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:05:02.285 TEST_HEADER include/spdk/vhost.h 00:05:02.285 TEST_HEADER include/spdk/vmd.h 00:05:02.285 TEST_HEADER include/spdk/xor.h 00:05:02.285 TEST_HEADER include/spdk/zipf.h 00:05:02.285 CXX test/cpp_headers/accel.o 00:05:02.285 CXX test/cpp_headers/accel_module.o 00:05:02.285 CXX test/cpp_headers/assert.o 00:05:02.285 CXX test/cpp_headers/barrier.o 00:05:02.285 CXX test/cpp_headers/base64.o 00:05:02.285 CXX test/cpp_headers/bdev.o 00:05:02.285 CXX test/cpp_headers/bdev_module.o 00:05:02.285 CXX test/cpp_headers/bdev_zone.o 00:05:02.285 CXX test/cpp_headers/bit_array.o 00:05:02.285 CXX test/cpp_headers/bit_pool.o 00:05:02.285 CXX test/cpp_headers/blob_bdev.o 00:05:02.285 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.285 CXX test/cpp_headers/blobfs.o 00:05:02.285 CXX test/cpp_headers/blob.o 00:05:02.285 CXX test/cpp_headers/conf.o 00:05:02.285 CXX test/cpp_headers/config.o 00:05:02.285 CXX test/cpp_headers/cpuset.o 00:05:02.285 CXX test/cpp_headers/crc16.o 00:05:02.285 CC app/iscsi_tgt/iscsi_tgt.o 00:05:02.285 CC app/nvmf_tgt/nvmf_main.o 00:05:02.285 CXX test/cpp_headers/crc32.o 00:05:02.285 CC examples/util/zipf/zipf.o 00:05:02.285 CC app/spdk_tgt/spdk_tgt.o 00:05:02.285 CC test/thread/poller_perf/poller_perf.o 00:05:02.285 CC test/app/jsoncat/jsoncat.o 00:05:02.285 CC test/app/stub/stub.o 00:05:02.285 CC test/env/vtophys/vtophys.o 00:05:02.285 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:02.285 CC examples/ioat/perf/perf.o 00:05:02.551 CC examples/ioat/verify/verify.o 00:05:02.551 CC test/app/histogram_perf/histogram_perf.o 00:05:02.551 CC test/env/memory/memory_ut.o 00:05:02.551 CC test/env/pci/pci_ut.o 00:05:02.551 CC app/fio/nvme/fio_plugin.o 00:05:02.551 CC test/dma/test_dma/test_dma.o 00:05:02.551 CC test/app/bdev_svc/bdev_svc.o 00:05:02.551 CC app/fio/bdev/fio_plugin.o 00:05:02.551 LINK spdk_lspci 00:05:02.551 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:02.551 CC test/env/mem_callbacks/mem_callbacks.o 00:05:02.821 LINK rpc_client_test 00:05:02.821 LINK spdk_nvme_discover 00:05:02.821 LINK interrupt_tgt 00:05:02.821 LINK jsoncat 00:05:02.821 LINK poller_perf 00:05:02.821 LINK zipf 00:05:02.821 LINK histogram_perf 00:05:02.821 LINK vtophys 00:05:02.821 LINK nvmf_tgt 00:05:02.821 CXX test/cpp_headers/crc64.o 00:05:02.821 CXX test/cpp_headers/dif.o 00:05:02.821 CXX test/cpp_headers/dma.o 00:05:02.821 CXX test/cpp_headers/endian.o 00:05:02.821 CXX test/cpp_headers/env_dpdk.o 00:05:02.821 LINK spdk_trace_record 00:05:02.821 CXX test/cpp_headers/env.o 00:05:02.821 CXX test/cpp_headers/event.o 00:05:02.821 LINK env_dpdk_post_init 00:05:02.821 CXX test/cpp_headers/fd_group.o 00:05:02.821 CXX test/cpp_headers/fd.o 00:05:02.821 CXX test/cpp_headers/file.o 00:05:02.821 LINK stub 00:05:02.821 LINK iscsi_tgt 00:05:02.821 CXX test/cpp_headers/fsdev.o 00:05:02.821 CXX test/cpp_headers/fsdev_module.o 00:05:02.821 CXX test/cpp_headers/ftl.o 00:05:02.821 CXX test/cpp_headers/fuse_dispatcher.o 00:05:02.821 CXX test/cpp_headers/gpt_spec.o 00:05:02.821 CXX test/cpp_headers/hexlify.o 00:05:02.821 LINK ioat_perf 00:05:02.821 LINK bdev_svc 00:05:02.821 LINK verify 00:05:02.821 CXX test/cpp_headers/histogram_data.o 00:05:03.084 LINK spdk_tgt 00:05:03.084 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:03.084 CXX test/cpp_headers/idxd.o 00:05:03.084 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:03.084 CXX test/cpp_headers/idxd_spec.o 00:05:03.084 CXX test/cpp_headers/init.o 00:05:03.084 CXX test/cpp_headers/ioat.o 00:05:03.084 CXX test/cpp_headers/ioat_spec.o 
00:05:03.084 LINK spdk_dd 00:05:03.084 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:03.084 CXX test/cpp_headers/iscsi_spec.o 00:05:03.084 CXX test/cpp_headers/json.o 00:05:03.084 LINK spdk_trace 00:05:03.084 CXX test/cpp_headers/jsonrpc.o 00:05:03.084 CXX test/cpp_headers/keyring.o 00:05:03.084 CXX test/cpp_headers/keyring_module.o 00:05:03.351 CXX test/cpp_headers/likely.o 00:05:03.351 CXX test/cpp_headers/log.o 00:05:03.351 LINK pci_ut 00:05:03.351 CXX test/cpp_headers/lvol.o 00:05:03.351 CXX test/cpp_headers/md5.o 00:05:03.351 CXX test/cpp_headers/memory.o 00:05:03.351 CXX test/cpp_headers/mmio.o 00:05:03.351 CXX test/cpp_headers/nbd.o 00:05:03.351 CXX test/cpp_headers/net.o 00:05:03.351 CXX test/cpp_headers/notify.o 00:05:03.351 CXX test/cpp_headers/nvme.o 00:05:03.351 CXX test/cpp_headers/nvme_intel.o 00:05:03.351 CXX test/cpp_headers/nvme_ocssd.o 00:05:03.351 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:03.351 CXX test/cpp_headers/nvme_spec.o 00:05:03.351 CXX test/cpp_headers/nvme_zns.o 00:05:03.351 CC test/event/reactor/reactor.o 00:05:03.351 CXX test/cpp_headers/nvmf_cmd.o 00:05:03.351 CC test/event/reactor_perf/reactor_perf.o 00:05:03.351 CC test/event/event_perf/event_perf.o 00:05:03.351 LINK nvme_fuzz 00:05:03.351 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:03.351 CXX test/cpp_headers/nvmf.o 00:05:03.351 CXX test/cpp_headers/nvmf_spec.o 00:05:03.351 CC examples/sock/hello_world/hello_sock.o 00:05:03.351 CC test/event/app_repeat/app_repeat.o 00:05:03.351 CC examples/thread/thread/thread_ex.o 00:05:03.351 CXX test/cpp_headers/nvmf_transport.o 00:05:03.616 CXX test/cpp_headers/opal.o 00:05:03.616 CC examples/vmd/led/led.o 00:05:03.616 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.616 CC test/event/scheduler/scheduler.o 00:05:03.616 CC examples/idxd/perf/perf.o 00:05:03.616 LINK test_dma 00:05:03.616 CXX test/cpp_headers/opal_spec.o 00:05:03.616 CXX test/cpp_headers/pci_ids.o 00:05:03.616 CXX test/cpp_headers/pipe.o 00:05:03.616 CXX test/cpp_headers/queue.o 00:05:03.616 CXX test/cpp_headers/reduce.o 00:05:03.616 CXX test/cpp_headers/rpc.o 00:05:03.616 CXX test/cpp_headers/scheduler.o 00:05:03.616 CXX test/cpp_headers/scsi.o 00:05:03.616 CXX test/cpp_headers/scsi_spec.o 00:05:03.616 CXX test/cpp_headers/sock.o 00:05:03.616 CXX test/cpp_headers/stdinc.o 00:05:03.616 CXX test/cpp_headers/string.o 00:05:03.616 LINK reactor 00:05:03.616 CXX test/cpp_headers/thread.o 00:05:03.616 CXX test/cpp_headers/trace.o 00:05:03.616 CXX test/cpp_headers/trace_parser.o 00:05:03.616 LINK reactor_perf 00:05:03.616 LINK event_perf 00:05:03.878 CXX test/cpp_headers/tree.o 00:05:03.878 LINK spdk_bdev 00:05:03.878 CC app/vhost/vhost.o 00:05:03.878 CXX test/cpp_headers/ublk.o 00:05:03.878 CXX test/cpp_headers/util.o 00:05:03.878 CXX test/cpp_headers/uuid.o 00:05:03.878 CXX test/cpp_headers/version.o 00:05:03.878 LINK app_repeat 00:05:03.878 CXX test/cpp_headers/vfio_user_pci.o 00:05:03.878 LINK lsvmd 00:05:03.878 LINK led 00:05:03.878 LINK mem_callbacks 00:05:03.878 CXX test/cpp_headers/vfio_user_spec.o 00:05:03.878 LINK spdk_nvme_perf 00:05:03.878 CXX test/cpp_headers/vhost.o 00:05:03.878 LINK spdk_nvme 00:05:03.878 CXX test/cpp_headers/vmd.o 00:05:03.878 CXX test/cpp_headers/xor.o 00:05:03.878 CXX test/cpp_headers/zipf.o 00:05:03.878 LINK vhost_fuzz 00:05:03.878 LINK spdk_nvme_identify 00:05:03.878 LINK hello_sock 00:05:03.878 LINK scheduler 00:05:03.878 LINK spdk_top 00:05:03.878 LINK thread 00:05:04.136 LINK vhost 00:05:04.136 CC test/nvme/reset/reset.o 00:05:04.136 CC test/nvme/reserve/reserve.o 00:05:04.136 
CC test/nvme/simple_copy/simple_copy.o 00:05:04.136 CC test/nvme/connect_stress/connect_stress.o 00:05:04.136 CC test/nvme/fdp/fdp.o 00:05:04.136 CC test/nvme/cuse/cuse.o 00:05:04.136 CC test/nvme/startup/startup.o 00:05:04.136 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.136 CC test/nvme/boot_partition/boot_partition.o 00:05:04.136 CC test/nvme/err_injection/err_injection.o 00:05:04.136 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:04.136 CC test/nvme/sgl/sgl.o 00:05:04.136 CC test/nvme/aer/aer.o 00:05:04.136 CC test/nvme/overhead/overhead.o 00:05:04.136 CC test/nvme/e2edp/nvme_dp.o 00:05:04.136 CC test/nvme/compliance/nvme_compliance.o 00:05:04.136 LINK idxd_perf 00:05:04.136 CC test/blobfs/mkfs/mkfs.o 00:05:04.136 CC test/accel/dif/dif.o 00:05:04.396 CC test/lvol/esnap/esnap.o 00:05:04.396 CC examples/nvme/hotplug/hotplug.o 00:05:04.396 CC examples/nvme/arbitration/arbitration.o 00:05:04.396 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:04.396 CC examples/nvme/hello_world/hello_world.o 00:05:04.396 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:04.396 CC examples/nvme/reconnect/reconnect.o 00:05:04.396 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:04.396 CC examples/nvme/abort/abort.o 00:05:04.396 LINK boot_partition 00:05:04.396 LINK connect_stress 00:05:04.396 LINK doorbell_aers 00:05:04.396 LINK reserve 00:05:04.396 LINK fused_ordering 00:05:04.656 LINK startup 00:05:04.656 CC examples/accel/perf/accel_perf.o 00:05:04.656 CC examples/blob/cli/blobcli.o 00:05:04.656 LINK reset 00:05:04.656 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:04.656 LINK err_injection 00:05:04.656 LINK mkfs 00:05:04.656 CC examples/blob/hello_world/hello_blob.o 00:05:04.656 LINK overhead 00:05:04.656 LINK simple_copy 00:05:04.656 LINK pmr_persistence 00:05:04.656 LINK fdp 00:05:04.656 LINK aer 00:05:04.656 LINK memory_ut 00:05:04.656 LINK cmb_copy 00:05:04.656 LINK sgl 00:05:04.656 LINK nvme_dp 00:05:04.656 LINK hello_world 00:05:04.656 LINK hotplug 00:05:04.656 LINK nvme_compliance 00:05:04.916 LINK arbitration 00:05:04.916 LINK hello_blob 00:05:04.916 LINK reconnect 00:05:04.916 LINK hello_fsdev 00:05:05.174 LINK abort 00:05:05.174 LINK nvme_manage 00:05:05.174 LINK blobcli 00:05:05.174 LINK dif 00:05:05.174 LINK accel_perf 00:05:05.434 LINK iscsi_fuzz 00:05:05.434 CC test/bdev/bdevio/bdevio.o 00:05:05.693 CC examples/bdev/hello_world/hello_bdev.o 00:05:05.693 CC examples/bdev/bdevperf/bdevperf.o 00:05:05.951 LINK hello_bdev 00:05:05.951 LINK cuse 00:05:05.951 LINK bdevio 00:05:06.521 LINK bdevperf 00:05:06.780 CC examples/nvmf/nvmf/nvmf.o 00:05:07.038 LINK nvmf 00:05:09.581 LINK esnap 00:05:09.839 00:05:09.839 real 1m7.638s 00:05:09.839 user 9m4.179s 00:05:09.839 sys 1m59.077s 00:05:09.839 00:32:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:09.839 00:32:25 make -- common/autotest_common.sh@10 -- $ set +x 00:05:09.839 ************************************ 00:05:09.839 END TEST make 00:05:09.839 ************************************ 00:05:09.839 00:32:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:09.839 00:32:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:09.839 00:32:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:09.839 00:32:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.839 00:32:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:09.839 00:32:25 -- pm/common@44 -- $ pid=16307 00:05:09.839 00:32:25 -- pm/common@50 -- 
$ kill -TERM 16307 00:05:09.839 00:32:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.839 00:32:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:09.839 00:32:25 -- pm/common@44 -- $ pid=16309 00:05:09.839 00:32:25 -- pm/common@50 -- $ kill -TERM 16309 00:05:09.839 00:32:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.839 00:32:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:09.839 00:32:25 -- pm/common@44 -- $ pid=16311 00:05:09.839 00:32:25 -- pm/common@50 -- $ kill -TERM 16311 00:05:09.839 00:32:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.839 00:32:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:09.839 00:32:25 -- pm/common@44 -- $ pid=16342 00:05:09.839 00:32:25 -- pm/common@50 -- $ sudo -E kill -TERM 16342 00:05:09.839 00:32:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:09.839 00:32:25 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:09.839 00:32:25 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.839 00:32:25 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.839 00:32:25 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.098 00:32:25 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.098 00:32:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.098 00:32:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.098 00:32:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.098 00:32:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.098 00:32:25 -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.098 00:32:25 -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.098 00:32:25 -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.098 00:32:25 -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.098 00:32:25 -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.098 00:32:25 -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.098 00:32:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.098 00:32:25 -- scripts/common.sh@344 -- # case "$op" in 00:05:10.098 00:32:25 -- scripts/common.sh@345 -- # : 1 00:05:10.098 00:32:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.098 00:32:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.098 00:32:25 -- scripts/common.sh@365 -- # decimal 1 00:05:10.098 00:32:25 -- scripts/common.sh@353 -- # local d=1 00:05:10.098 00:32:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.098 00:32:25 -- scripts/common.sh@355 -- # echo 1 00:05:10.098 00:32:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.098 00:32:26 -- scripts/common.sh@366 -- # decimal 2 00:05:10.098 00:32:26 -- scripts/common.sh@353 -- # local d=2 00:05:10.098 00:32:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.098 00:32:26 -- scripts/common.sh@355 -- # echo 2 00:05:10.098 00:32:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.098 00:32:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.098 00:32:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.098 00:32:26 -- scripts/common.sh@368 -- # return 0 00:05:10.098 00:32:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.098 00:32:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.098 --rc genhtml_branch_coverage=1 00:05:10.098 --rc genhtml_function_coverage=1 00:05:10.098 --rc genhtml_legend=1 00:05:10.098 --rc geninfo_all_blocks=1 00:05:10.098 --rc geninfo_unexecuted_blocks=1 00:05:10.098 00:05:10.098 ' 00:05:10.098 00:32:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.098 --rc genhtml_branch_coverage=1 00:05:10.098 --rc genhtml_function_coverage=1 00:05:10.098 --rc genhtml_legend=1 00:05:10.098 --rc geninfo_all_blocks=1 00:05:10.098 --rc geninfo_unexecuted_blocks=1 00:05:10.098 00:05:10.098 ' 00:05:10.098 00:32:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.098 --rc genhtml_branch_coverage=1 00:05:10.098 --rc genhtml_function_coverage=1 00:05:10.098 --rc genhtml_legend=1 00:05:10.098 --rc geninfo_all_blocks=1 00:05:10.098 --rc geninfo_unexecuted_blocks=1 00:05:10.098 00:05:10.098 ' 00:05:10.098 00:32:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.098 --rc genhtml_branch_coverage=1 00:05:10.098 --rc genhtml_function_coverage=1 00:05:10.098 --rc genhtml_legend=1 00:05:10.098 --rc geninfo_all_blocks=1 00:05:10.098 --rc geninfo_unexecuted_blocks=1 00:05:10.098 00:05:10.098 ' 00:05:10.098 00:32:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.098 00:32:26 -- nvmf/common.sh@7 -- # uname -s 00:05:10.098 00:32:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.098 00:32:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.098 00:32:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.098 00:32:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.098 00:32:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.098 00:32:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.098 00:32:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.098 00:32:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.098 00:32:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.098 00:32:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.098 00:32:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.098 00:32:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:10.098 00:32:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.098 00:32:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.098 00:32:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.098 00:32:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.098 00:32:26 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.098 00:32:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.098 00:32:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.098 00:32:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.098 00:32:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.098 00:32:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.098 00:32:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.098 00:32:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.098 00:32:26 -- paths/export.sh@5 -- # export PATH 00:05:10.098 00:32:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.098 00:32:26 -- nvmf/common.sh@51 -- # : 0 00:05:10.098 00:32:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.098 00:32:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.098 00:32:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.098 00:32:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.098 00:32:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.098 00:32:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.098 00:32:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.098 00:32:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.098 00:32:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.098 00:32:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:10.098 00:32:26 -- spdk/autotest.sh@32 -- # uname -s 00:05:10.098 00:32:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:10.098 00:32:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:10.098 00:32:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
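A minimal sketch of the core-dump redirection being set up above, assuming the new pattern is written to /proc/sys/kernel/core_pattern (the redirection target is not visible in the trace); OUTPUT_DIR and SPDK_DIR are placeholders:

    OUTPUT_DIR=/path/to/output
    SPDK_DIR=/path/to/spdk
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)    # saved so it can be restored afterwards
    mkdir -p "$OUTPUT_DIR/coredumps"
    echo "|$SPDK_DIR/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern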
00:05:10.098 00:32:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:10.098 00:32:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:10.098 00:32:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:10.098 00:32:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:10.098 00:32:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:10.098 00:32:26 -- spdk/autotest.sh@48 -- # udevadm_pid=97940 00:05:10.098 00:32:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:10.098 00:32:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:10.098 00:32:26 -- pm/common@17 -- # local monitor 00:05:10.098 00:32:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.098 00:32:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.098 00:32:26 -- pm/common@21 -- # date +%s 00:05:10.098 00:32:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.098 00:32:26 -- pm/common@21 -- # date +%s 00:05:10.098 00:32:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.098 00:32:26 -- pm/common@21 -- # date +%s 00:05:10.098 00:32:26 -- pm/common@25 -- # sleep 1 00:05:10.098 00:32:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733527946 00:05:10.098 00:32:26 -- pm/common@21 -- # date +%s 00:05:10.098 00:32:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733527946 00:05:10.098 00:32:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733527946 00:05:10.098 00:32:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733527946 00:05:10.098 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733527946_collect-cpu-load.pm.log 00:05:10.098 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733527946_collect-vmstat.pm.log 00:05:10.098 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733527946_collect-cpu-temp.pm.log 00:05:10.098 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733527946_collect-bmc-pm.bmc.pm.log 00:05:11.037 00:32:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:11.037 00:32:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:11.037 00:32:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.037 00:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:11.037 00:32:27 -- spdk/autotest.sh@59 -- # create_test_list 00:05:11.037 00:32:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:11.037 00:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:11.037 00:32:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:11.037 00:32:27 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.037 00:32:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.037 00:32:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:11.037 00:32:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.037 00:32:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:11.037 00:32:27 -- common/autotest_common.sh@1457 -- # uname 00:05:11.037 00:32:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:11.037 00:32:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:11.037 00:32:27 -- common/autotest_common.sh@1477 -- # uname 00:05:11.037 00:32:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:11.037 00:32:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:11.037 00:32:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:11.295 lcov: LCOV version 1.15 00:05:11.295 00:32:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:33.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:33.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:51.303 00:33:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:51.303 00:33:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.303 00:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 00:33:04 -- spdk/autotest.sh@78 -- # rm -f 00:05:51.303 00:33:04 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:51.303 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:51.303 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:51.303 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:51.303 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:51.303 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:51.303 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:51.303 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:51.303 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:51.303 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:51.303 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:51.303 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:51.303 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:51.303 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:51.303 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:51.303 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:51.303 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:51.303 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:51.303 00:33:06 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:51.303 00:33:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:51.303 00:33:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:51.303 00:33:06 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:51.303 00:33:06 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:51.303 00:33:06 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:51.303 00:33:06 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:51.303 00:33:06 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:05:51.303 00:33:06 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:51.303 00:33:06 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:51.303 00:33:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:51.303 00:33:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:51.303 00:33:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:51.303 00:33:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:51.303 00:33:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:51.303 00:33:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:51.303 00:33:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:51.303 00:33:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:51.303 00:33:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:51.303 No valid GPT data, bailing 00:05:51.303 00:33:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:51.303 00:33:06 -- scripts/common.sh@394 -- # pt= 00:05:51.303 00:33:06 -- scripts/common.sh@395 -- # return 1 00:05:51.303 00:33:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:51.303 1+0 records in 00:05:51.303 1+0 records out 00:05:51.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177412 s, 591 MB/s 00:05:51.303 00:33:06 -- spdk/autotest.sh@105 -- # sync 00:05:51.303 00:33:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:51.303 00:33:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:51.303 00:33:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:52.679 00:33:08 -- spdk/autotest.sh@111 -- # uname -s 00:05:52.679 00:33:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:52.679 00:33:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:52.679 00:33:08 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:54.061 Hugepages 00:05:54.061 node hugesize free / total 00:05:54.061 node0 1048576kB 0 / 0 00:05:54.061 node0 2048kB 0 / 0 00:05:54.061 node1 1048576kB 0 / 0 00:05:54.061 node1 2048kB 0 / 0 00:05:54.061 00:05:54.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:54.061 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:54.061 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:05:54.061 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:54.061 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:54.061 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:54.061 00:33:10 -- spdk/autotest.sh@117 -- # uname -s 00:05:54.061 00:33:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:54.061 00:33:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:54.061 00:33:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.444 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.444 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.444 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:56.385 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:56.645 00:33:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:57.587 00:33:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:57.587 00:33:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:57.587 00:33:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:57.587 00:33:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:57.587 00:33:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:57.587 00:33:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:57.587 00:33:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.587 00:33:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:57.587 00:33:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:57.587 00:33:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:57.587 00:33:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:57.587 00:33:13 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:58.966 Waiting for block devices as requested 00:05:58.966 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:58.966 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:58.966 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:59.225 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:59.225 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:59.225 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:59.225 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:59.485 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:59.485 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:59.485 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:59.485 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
00:05:59.744 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:59.744 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:59.744 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:00.003 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:00.003 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:00.003 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:00.264 00:33:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:00.264 00:33:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:06:00.264 00:33:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:06:00.264 00:33:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:00.264 00:33:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:00.264 00:33:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:00.264 00:33:16 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:00.264 00:33:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:00.264 00:33:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:00.264 00:33:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:00.264 00:33:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:00.264 00:33:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:00.264 00:33:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:00.264 00:33:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:00.264 00:33:16 -- common/autotest_common.sh@1543 -- # continue 00:06:00.264 00:33:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:00.264 00:33:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.264 00:33:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.264 00:33:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:00.264 00:33:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.264 00:33:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.264 00:33:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:01.642 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:01.642 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:01.642 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:02.583 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:02.583 00:33:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:02.583 00:33:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.583 00:33:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.583 00:33:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:02.583 00:33:18 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:02.583 00:33:18 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:02.583 00:33:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:02.583 00:33:18 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:02.583 00:33:18 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:02.583 00:33:18 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:02.583 00:33:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:02.583 00:33:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:02.583 00:33:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:02.583 00:33:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:02.583 00:33:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:02.583 00:33:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:02.841 00:33:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:02.841 00:33:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:06:02.841 00:33:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:02.841 00:33:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:06:02.841 00:33:18 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:02.841 00:33:18 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:02.841 00:33:18 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:02.841 00:33:18 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:02.841 00:33:18 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:06:02.841 00:33:18 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:06:02.841 00:33:18 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=109327 00:06:02.841 00:33:18 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.841 00:33:18 -- common/autotest_common.sh@1585 -- # waitforlisten 109327 00:06:02.841 00:33:18 -- common/autotest_common.sh@835 -- # '[' -z 109327 ']' 00:06:02.841 00:33:18 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.842 00:33:18 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.842 00:33:18 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.842 00:33:18 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.842 00:33:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.842 [2024-12-07 00:33:18.874160] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:02.842 [2024-12-07 00:33:18.874237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109327 ] 00:06:02.842 [2024-12-07 00:33:18.941855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.842 [2024-12-07 00:33:18.990274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.101 00:33:19 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.101 00:33:19 -- common/autotest_common.sh@868 -- # return 0 00:06:03.101 00:33:19 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:03.101 00:33:19 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:03.101 00:33:19 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:06.393 nvme0n1 00:06:06.393 00:33:22 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:06.652 [2024-12-07 00:33:22.583540] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:06.652 [2024-12-07 00:33:22.583587] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:06.652 request: 00:06:06.652 { 00:06:06.652 "nvme_ctrlr_name": "nvme0", 00:06:06.652 "password": "test", 00:06:06.652 "method": "bdev_nvme_opal_revert", 00:06:06.652 "req_id": 1 00:06:06.652 } 00:06:06.652 Got JSON-RPC error response 00:06:06.652 response: 00:06:06.652 { 00:06:06.652 "code": -32603, 00:06:06.652 "message": "Internal error" 00:06:06.652 } 00:06:06.652 00:33:22 -- common/autotest_common.sh@1591 -- # true 00:06:06.652 00:33:22 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:06.652 00:33:22 -- common/autotest_common.sh@1595 -- # killprocess 109327 00:06:06.652 00:33:22 -- common/autotest_common.sh@954 -- # '[' -z 109327 ']' 00:06:06.652 00:33:22 -- common/autotest_common.sh@958 -- # kill -0 109327 00:06:06.652 00:33:22 -- common/autotest_common.sh@959 -- # uname 00:06:06.652 00:33:22 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.652 00:33:22 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109327 00:06:06.652 00:33:22 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.652 00:33:22 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.652 00:33:22 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109327' 00:06:06.652 killing process with pid 109327 00:06:06.652 00:33:22 -- common/autotest_common.sh@973 -- # kill 109327 00:06:06.652 00:33:22 -- common/autotest_common.sh@978 -- # wait 109327 00:06:08.555 00:33:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:08.555 00:33:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:08.555 00:33:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.555 00:33:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.555 00:33:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:08.555 00:33:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.555 00:33:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.555 00:33:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:08.555 00:33:24 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
00:06:08.555 00:33:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.555 00:33:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.555 00:33:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.555 ************************************ 00:06:08.555 START TEST env 00:06:08.555 ************************************ 00:06:08.555 00:33:24 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:08.555 * Looking for test storage... 00:06:08.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:08.555 00:33:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.555 00:33:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.555 00:33:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.555 00:33:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.555 00:33:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.555 00:33:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.556 00:33:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.556 00:33:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.556 00:33:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.556 00:33:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.556 00:33:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.556 00:33:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.556 00:33:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.556 00:33:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.556 00:33:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.556 00:33:24 env -- scripts/common.sh@344 -- # case "$op" in 00:06:08.556 00:33:24 env -- scripts/common.sh@345 -- # : 1 00:06:08.556 00:33:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.556 00:33:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.556 00:33:24 env -- scripts/common.sh@365 -- # decimal 1 00:06:08.556 00:33:24 env -- scripts/common.sh@353 -- # local d=1 00:06:08.556 00:33:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.556 00:33:24 env -- scripts/common.sh@355 -- # echo 1 00:06:08.556 00:33:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.556 00:33:24 env -- scripts/common.sh@366 -- # decimal 2 00:06:08.556 00:33:24 env -- scripts/common.sh@353 -- # local d=2 00:06:08.556 00:33:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.556 00:33:24 env -- scripts/common.sh@355 -- # echo 2 00:06:08.556 00:33:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.556 00:33:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.556 00:33:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.556 00:33:24 env -- scripts/common.sh@368 -- # return 0 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.556 --rc genhtml_branch_coverage=1 00:06:08.556 --rc genhtml_function_coverage=1 00:06:08.556 --rc genhtml_legend=1 00:06:08.556 --rc geninfo_all_blocks=1 00:06:08.556 --rc geninfo_unexecuted_blocks=1 00:06:08.556 00:06:08.556 ' 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.556 --rc genhtml_branch_coverage=1 00:06:08.556 --rc genhtml_function_coverage=1 00:06:08.556 --rc genhtml_legend=1 00:06:08.556 --rc geninfo_all_blocks=1 00:06:08.556 --rc geninfo_unexecuted_blocks=1 00:06:08.556 00:06:08.556 ' 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.556 --rc genhtml_branch_coverage=1 00:06:08.556 --rc genhtml_function_coverage=1 00:06:08.556 --rc genhtml_legend=1 00:06:08.556 --rc geninfo_all_blocks=1 00:06:08.556 --rc geninfo_unexecuted_blocks=1 00:06:08.556 00:06:08.556 ' 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.556 --rc genhtml_branch_coverage=1 00:06:08.556 --rc genhtml_function_coverage=1 00:06:08.556 --rc genhtml_legend=1 00:06:08.556 --rc geninfo_all_blocks=1 00:06:08.556 --rc geninfo_unexecuted_blocks=1 00:06:08.556 00:06:08.556 ' 00:06:08.556 00:33:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.556 00:33:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.556 00:33:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.556 ************************************ 00:06:08.556 START TEST env_memory 00:06:08.556 ************************************ 00:06:08.556 00:33:24 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:08.556 00:06:08.556 00:06:08.556 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.556 http://cunit.sourceforge.net/ 00:06:08.556 00:06:08.556 00:06:08.556 Suite: memory 00:06:08.556 Test: alloc and free memory map ...[2024-12-07 00:33:24.602144] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:08.556 passed 00:06:08.556 Test: mem map translation ...[2024-12-07 00:33:24.621764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:08.556 [2024-12-07 00:33:24.621786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:08.556 [2024-12-07 00:33:24.621837] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:08.556 [2024-12-07 00:33:24.621849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:08.556 passed 00:06:08.556 Test: mem map registration ...[2024-12-07 00:33:24.662516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:08.556 [2024-12-07 00:33:24.662535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:08.556 passed 00:06:08.815 Test: mem map adjacent registrations ...passed 00:06:08.815 00:06:08.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.815 suites 1 1 n/a 0 0 00:06:08.815 tests 4 4 4 0 0 00:06:08.815 asserts 152 152 152 0 n/a 00:06:08.815 00:06:08.815 Elapsed time = 0.137 seconds 00:06:08.815 00:06:08.815 real 0m0.145s 00:06:08.815 user 0m0.136s 00:06:08.815 sys 0m0.008s 00:06:08.815 00:33:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.815 00:33:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:08.815 ************************************ 00:06:08.815 END TEST env_memory 00:06:08.815 ************************************ 00:06:08.815 00:33:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:08.815 00:33:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.815 00:33:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.815 00:33:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.815 ************************************ 00:06:08.815 START TEST env_vtophys 00:06:08.815 ************************************ 00:06:08.815 00:33:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:08.815 EAL: lib.eal log level changed from notice to debug 00:06:08.815 EAL: Detected lcore 0 as core 0 on socket 0 00:06:08.815 EAL: Detected lcore 1 as core 1 on socket 0 00:06:08.815 EAL: Detected lcore 2 as core 2 on socket 0 00:06:08.815 EAL: Detected lcore 3 as core 3 on socket 0 00:06:08.815 EAL: Detected lcore 4 as core 4 on socket 0 00:06:08.815 EAL: Detected lcore 5 as core 5 on socket 0 00:06:08.815 EAL: Detected lcore 6 as core 8 on socket 0 00:06:08.815 EAL: Detected lcore 7 as core 9 on socket 0 00:06:08.815 EAL: Detected lcore 8 as core 10 on socket 0 00:06:08.815 EAL: Detected lcore 9 as core 11 on socket 0 00:06:08.815 EAL: Detected lcore 10 
as core 12 on socket 0 00:06:08.815 EAL: Detected lcore 11 as core 13 on socket 0 00:06:08.815 EAL: Detected lcore 12 as core 0 on socket 1 00:06:08.815 EAL: Detected lcore 13 as core 1 on socket 1 00:06:08.815 EAL: Detected lcore 14 as core 2 on socket 1 00:06:08.815 EAL: Detected lcore 15 as core 3 on socket 1 00:06:08.815 EAL: Detected lcore 16 as core 4 on socket 1 00:06:08.815 EAL: Detected lcore 17 as core 5 on socket 1 00:06:08.815 EAL: Detected lcore 18 as core 8 on socket 1 00:06:08.815 EAL: Detected lcore 19 as core 9 on socket 1 00:06:08.815 EAL: Detected lcore 20 as core 10 on socket 1 00:06:08.815 EAL: Detected lcore 21 as core 11 on socket 1 00:06:08.815 EAL: Detected lcore 22 as core 12 on socket 1 00:06:08.815 EAL: Detected lcore 23 as core 13 on socket 1 00:06:08.815 EAL: Detected lcore 24 as core 0 on socket 0 00:06:08.815 EAL: Detected lcore 25 as core 1 on socket 0 00:06:08.815 EAL: Detected lcore 26 as core 2 on socket 0 00:06:08.815 EAL: Detected lcore 27 as core 3 on socket 0 00:06:08.815 EAL: Detected lcore 28 as core 4 on socket 0 00:06:08.815 EAL: Detected lcore 29 as core 5 on socket 0 00:06:08.815 EAL: Detected lcore 30 as core 8 on socket 0 00:06:08.815 EAL: Detected lcore 31 as core 9 on socket 0 00:06:08.815 EAL: Detected lcore 32 as core 10 on socket 0 00:06:08.815 EAL: Detected lcore 33 as core 11 on socket 0 00:06:08.815 EAL: Detected lcore 34 as core 12 on socket 0 00:06:08.815 EAL: Detected lcore 35 as core 13 on socket 0 00:06:08.815 EAL: Detected lcore 36 as core 0 on socket 1 00:06:08.815 EAL: Detected lcore 37 as core 1 on socket 1 00:06:08.815 EAL: Detected lcore 38 as core 2 on socket 1 00:06:08.815 EAL: Detected lcore 39 as core 3 on socket 1 00:06:08.815 EAL: Detected lcore 40 as core 4 on socket 1 00:06:08.815 EAL: Detected lcore 41 as core 5 on socket 1 00:06:08.815 EAL: Detected lcore 42 as core 8 on socket 1 00:06:08.815 EAL: Detected lcore 43 as core 9 on socket 1 00:06:08.815 EAL: Detected lcore 44 as core 10 on socket 1 00:06:08.815 EAL: Detected lcore 45 as core 11 on socket 1 00:06:08.815 EAL: Detected lcore 46 as core 12 on socket 1 00:06:08.815 EAL: Detected lcore 47 as core 13 on socket 1 00:06:08.815 EAL: Maximum logical cores by configuration: 128 00:06:08.815 EAL: Detected CPU lcores: 48 00:06:08.815 EAL: Detected NUMA nodes: 2 00:06:08.815 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:08.815 EAL: Detected shared linkage of DPDK 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:08.815 EAL: Registered [vdev] bus. 
00:06:08.815 EAL: bus.vdev log level changed from disabled to notice 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:08.815 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:08.815 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:08.815 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:08.815 EAL: No shared files mode enabled, IPC will be disabled 00:06:08.815 EAL: No shared files mode enabled, IPC is disabled 00:06:08.815 EAL: Bus pci wants IOVA as 'DC' 00:06:08.815 EAL: Bus vdev wants IOVA as 'DC' 00:06:08.815 EAL: Buses did not request a specific IOVA mode. 00:06:08.815 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:08.815 EAL: Selected IOVA mode 'VA' 00:06:08.815 EAL: Probing VFIO support... 00:06:08.815 EAL: IOMMU type 1 (Type 1) is supported 00:06:08.815 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:08.815 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:08.815 EAL: VFIO support initialized 00:06:08.815 EAL: Ask a virtual area of 0x2e000 bytes 00:06:08.815 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:08.815 EAL: Setting up physically contiguous memory... 
00:06:08.815 EAL: Setting maximum number of open files to 524288 00:06:08.815 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:08.815 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:08.815 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:08.815 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.815 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:08.815 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.815 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.815 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:08.815 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:08.815 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.815 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:08.816 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:08.816 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.816 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:08.816 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.816 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.816 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:08.816 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:08.816 EAL: Hugepages will be freed exactly as allocated. 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: TSC frequency is ~2700000 KHz 00:06:08.816 EAL: Main lcore 0 is ready (tid=7fd98b5b8a00;cpuset=[0]) 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 0 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 2MB 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:08.816 EAL: Mem event callback 'spdk:(nil)' registered 00:06:08.816 00:06:08.816 00:06:08.816 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.816 http://cunit.sourceforge.net/ 00:06:08.816 00:06:08.816 00:06:08.816 Suite: components_suite 00:06:08.816 Test: vtophys_malloc_test ...passed 00:06:08.816 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 4MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 4MB 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 6MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 6MB 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 10MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 10MB 00:06:08.816 EAL: Trying to obtain current memory policy. 
00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 18MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 18MB 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 34MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 34MB 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 66MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was shrunk by 66MB 00:06:08.816 EAL: Trying to obtain current memory policy. 00:06:08.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.816 EAL: Restoring previous memory policy: 4 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.816 EAL: request: mp_malloc_sync 00:06:08.816 EAL: No shared files mode enabled, IPC is disabled 00:06:08.816 EAL: Heap on socket 0 was expanded by 130MB 00:06:08.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.075 EAL: request: mp_malloc_sync 00:06:09.075 EAL: No shared files mode enabled, IPC is disabled 00:06:09.075 EAL: Heap on socket 0 was shrunk by 130MB 00:06:09.075 EAL: Trying to obtain current memory policy. 00:06:09.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.075 EAL: Restoring previous memory policy: 4 00:06:09.075 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.075 EAL: request: mp_malloc_sync 00:06:09.075 EAL: No shared files mode enabled, IPC is disabled 00:06:09.075 EAL: Heap on socket 0 was expanded by 258MB 00:06:09.075 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.075 EAL: request: mp_malloc_sync 00:06:09.075 EAL: No shared files mode enabled, IPC is disabled 00:06:09.075 EAL: Heap on socket 0 was shrunk by 258MB 00:06:09.075 EAL: Trying to obtain current memory policy. 
00:06:09.075 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.334 EAL: Restoring previous memory policy: 4 00:06:09.334 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.334 EAL: request: mp_malloc_sync 00:06:09.334 EAL: No shared files mode enabled, IPC is disabled 00:06:09.334 EAL: Heap on socket 0 was expanded by 514MB 00:06:09.334 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.591 EAL: request: mp_malloc_sync 00:06:09.591 EAL: No shared files mode enabled, IPC is disabled 00:06:09.591 EAL: Heap on socket 0 was shrunk by 514MB 00:06:09.591 EAL: Trying to obtain current memory policy. 00:06:09.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.848 EAL: Restoring previous memory policy: 4 00:06:09.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.848 EAL: request: mp_malloc_sync 00:06:09.848 EAL: No shared files mode enabled, IPC is disabled 00:06:09.848 EAL: Heap on socket 0 was expanded by 1026MB 00:06:09.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.107 EAL: request: mp_malloc_sync 00:06:10.107 EAL: No shared files mode enabled, IPC is disabled 00:06:10.107 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:10.107 passed 00:06:10.107 00:06:10.107 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.107 suites 1 1 n/a 0 0 00:06:10.107 tests 2 2 2 0 0 00:06:10.107 asserts 497 497 497 0 n/a 00:06:10.107 00:06:10.107 Elapsed time = 1.316 seconds 00:06:10.107 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.107 EAL: request: mp_malloc_sync 00:06:10.107 EAL: No shared files mode enabled, IPC is disabled 00:06:10.107 EAL: Heap on socket 0 was shrunk by 2MB 00:06:10.107 EAL: No shared files mode enabled, IPC is disabled 00:06:10.107 EAL: No shared files mode enabled, IPC is disabled 00:06:10.107 EAL: No shared files mode enabled, IPC is disabled 00:06:10.107 00:06:10.107 real 0m1.436s 00:06:10.107 user 0m0.835s 00:06:10.107 sys 0m0.570s 00:06:10.107 00:33:26 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.107 00:33:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:10.107 ************************************ 00:06:10.107 END TEST env_vtophys 00:06:10.107 ************************************ 00:06:10.107 00:33:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:10.107 00:33:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.107 00:33:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.107 00:33:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.107 ************************************ 00:06:10.107 START TEST env_pci 00:06:10.107 ************************************ 00:06:10.107 00:33:26 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:10.107 00:06:10.107 00:06:10.107 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.107 http://cunit.sourceforge.net/ 00:06:10.107 00:06:10.107 00:06:10.107 Suite: pci 00:06:10.108 Test: pci_hook ...[2024-12-07 00:33:26.256108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110235 has claimed it 00:06:10.367 EAL: Cannot find device (10000:00:01.0) 00:06:10.367 EAL: Failed to attach device on primary process 00:06:10.367 passed 00:06:10.367 00:06:10.367 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:10.367 suites 1 1 n/a 0 0 00:06:10.367 tests 1 1 1 0 0 00:06:10.367 asserts 25 25 25 0 n/a 00:06:10.367 00:06:10.368 Elapsed time = 0.022 seconds 00:06:10.368 00:06:10.368 real 0m0.034s 00:06:10.368 user 0m0.010s 00:06:10.368 sys 0m0.024s 00:06:10.368 00:33:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.368 00:33:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:10.368 ************************************ 00:06:10.368 END TEST env_pci 00:06:10.368 ************************************ 00:06:10.368 00:33:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:10.368 00:33:26 env -- env/env.sh@15 -- # uname 00:06:10.368 00:33:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:10.368 00:33:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:10.368 00:33:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.368 00:33:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:10.368 00:33:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.368 00:33:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.368 ************************************ 00:06:10.368 START TEST env_dpdk_post_init 00:06:10.368 ************************************ 00:06:10.368 00:33:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:10.368 EAL: Detected CPU lcores: 48 00:06:10.368 EAL: Detected NUMA nodes: 2 00:06:10.368 EAL: Detected shared linkage of DPDK 00:06:10.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:10.368 EAL: Selected IOVA mode 'VA' 00:06:10.368 EAL: VFIO support initialized 00:06:10.368 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:10.368 EAL: Using IOMMU type 1 (Type 1) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:10.368 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:10.627 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:11.562 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
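[annotation] The probe lines above show env_dpdk_post_init claiming every I/OAT channel and the NVMe controller at 0000:88:00.0 through vfio-pci, which scripts/setup.sh bound earlier in this run. The binding goes through the standard Linux sysfs driver_override interface; the sketch below is the manual equivalent for a single device, using the 0000:88:00.0 BDF from this log purely as an example (setup.sh automates this for all managed devices, so its exact steps may differ).

    #!/usr/bin/env bash
    # Hand-rebind one PCI function to vfio-pci (sketch of what setup.sh does per device).
    bdf=0000:88:00.0                              # example BDF taken from this log
    dev=/sys/bus/pci/devices/$bdf
    modprobe vfio-pci                             # make sure the target driver is loaded
    if [ -e "$dev/driver" ]; then
        echo "$bdf" > "$dev/driver/unbind"        # detach from nvme/ioatdma
    fi
    echo vfio-pci > "$dev/driver_override"        # prefer vfio-pci for this device
    echo "$bdf" > /sys/bus/pci/drivers_probe      # ask the kernel to re-probe it
    readlink -f "$dev/driver"                     # should now end in .../vfio-pci

Returning a device to its kernel driver is the reverse operation (clear driver_override, unbind, re-probe), which is what the earlier "vfio-pci -> nvme" and "vfio-pci -> ioatdma" lines in the reset step correspond to.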
00:06:14.843 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:14.843 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:14.843 Starting DPDK initialization... 00:06:14.843 Starting SPDK post initialization... 00:06:14.843 SPDK NVMe probe 00:06:14.843 Attaching to 0000:88:00.0 00:06:14.843 Attached to 0000:88:00.0 00:06:14.843 Cleaning up... 00:06:14.843 00:06:14.843 real 0m4.400s 00:06:14.843 user 0m3.295s 00:06:14.843 sys 0m0.166s 00:06:14.843 00:33:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.843 00:33:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.843 ************************************ 00:06:14.843 END TEST env_dpdk_post_init 00:06:14.843 ************************************ 00:06:14.843 00:33:30 env -- env/env.sh@26 -- # uname 00:06:14.843 00:33:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:14.843 00:33:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.843 00:33:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.843 00:33:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.843 00:33:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.843 ************************************ 00:06:14.843 START TEST env_mem_callbacks 00:06:14.843 ************************************ 00:06:14.843 00:33:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.843 EAL: Detected CPU lcores: 48 00:06:14.843 EAL: Detected NUMA nodes: 2 00:06:14.843 EAL: Detected shared linkage of DPDK 00:06:14.843 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.843 EAL: Selected IOVA mode 'VA' 00:06:14.843 EAL: VFIO support initialized 00:06:14.843 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.843 00:06:14.843 00:06:14.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.843 http://cunit.sourceforge.net/ 00:06:14.843 00:06:14.843 00:06:14.843 Suite: memory 00:06:14.843 Test: test ... 
00:06:14.843 register 0x200000200000 2097152 00:06:14.843 malloc 3145728 00:06:14.843 register 0x200000400000 4194304 00:06:14.843 buf 0x200000500000 len 3145728 PASSED 00:06:14.843 malloc 64 00:06:14.843 buf 0x2000004fff40 len 64 PASSED 00:06:14.843 malloc 4194304 00:06:14.843 register 0x200000800000 6291456 00:06:14.843 buf 0x200000a00000 len 4194304 PASSED 00:06:14.843 free 0x200000500000 3145728 00:06:14.843 free 0x2000004fff40 64 00:06:14.843 unregister 0x200000400000 4194304 PASSED 00:06:14.843 free 0x200000a00000 4194304 00:06:14.843 unregister 0x200000800000 6291456 PASSED 00:06:14.843 malloc 8388608 00:06:14.843 register 0x200000400000 10485760 00:06:14.843 buf 0x200000600000 len 8388608 PASSED 00:06:14.843 free 0x200000600000 8388608 00:06:14.843 unregister 0x200000400000 10485760 PASSED 00:06:14.843 passed 00:06:14.843 00:06:14.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.843 suites 1 1 n/a 0 0 00:06:14.843 tests 1 1 1 0 0 00:06:14.843 asserts 15 15 15 0 n/a 00:06:14.843 00:06:14.843 Elapsed time = 0.005 seconds 00:06:14.843 00:06:14.843 real 0m0.048s 00:06:14.843 user 0m0.013s 00:06:14.843 sys 0m0.035s 00:06:14.843 00:33:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.843 00:33:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:14.843 ************************************ 00:06:14.843 END TEST env_mem_callbacks 00:06:14.843 ************************************ 00:06:14.843 00:06:14.843 real 0m6.454s 00:06:14.843 user 0m4.490s 00:06:14.843 sys 0m1.013s 00:06:14.843 00:33:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.843 00:33:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.843 ************************************ 00:06:14.843 END TEST env 00:06:14.843 ************************************ 00:06:14.843 00:33:30 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:14.843 00:33:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.843 00:33:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.843 00:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:14.843 ************************************ 00:06:14.843 START TEST rpc 00:06:14.843 ************************************ 00:06:14.843 00:33:30 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:14.843 * Looking for test storage... 
00:06:14.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.843 00:33:30 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.843 00:33:30 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.843 00:33:30 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.103 00:33:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.103 00:33:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.103 00:33:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.103 00:33:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.103 00:33:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.103 00:33:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:15.103 00:33:31 rpc -- scripts/common.sh@345 -- # : 1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.103 00:33:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.103 00:33:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@353 -- # local d=1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.103 00:33:31 rpc -- scripts/common.sh@355 -- # echo 1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.103 00:33:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@353 -- # local d=2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.103 00:33:31 rpc -- scripts/common.sh@355 -- # echo 2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.103 00:33:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.103 00:33:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.103 00:33:31 rpc -- scripts/common.sh@368 -- # return 0 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.103 --rc genhtml_branch_coverage=1 00:06:15.103 --rc genhtml_function_coverage=1 00:06:15.103 --rc genhtml_legend=1 00:06:15.103 --rc geninfo_all_blocks=1 00:06:15.103 --rc geninfo_unexecuted_blocks=1 00:06:15.103 00:06:15.103 ' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.103 --rc genhtml_branch_coverage=1 00:06:15.103 --rc genhtml_function_coverage=1 00:06:15.103 --rc genhtml_legend=1 00:06:15.103 --rc geninfo_all_blocks=1 00:06:15.103 --rc geninfo_unexecuted_blocks=1 00:06:15.103 00:06:15.103 ' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.103 --rc genhtml_branch_coverage=1 00:06:15.103 --rc genhtml_function_coverage=1 
00:06:15.103 --rc genhtml_legend=1 00:06:15.103 --rc geninfo_all_blocks=1 00:06:15.103 --rc geninfo_unexecuted_blocks=1 00:06:15.103 00:06:15.103 ' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.103 --rc genhtml_branch_coverage=1 00:06:15.103 --rc genhtml_function_coverage=1 00:06:15.103 --rc genhtml_legend=1 00:06:15.103 --rc geninfo_all_blocks=1 00:06:15.103 --rc geninfo_unexecuted_blocks=1 00:06:15.103 00:06:15.103 ' 00:06:15.103 00:33:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=110986 00:06:15.103 00:33:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:15.103 00:33:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.103 00:33:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 110986 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 110986 ']' 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.103 00:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.103 [2024-12-07 00:33:31.108970] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:15.103 [2024-12-07 00:33:31.109088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110986 ] 00:06:15.103 [2024-12-07 00:33:31.177903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.103 [2024-12-07 00:33:31.223346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:15.103 [2024-12-07 00:33:31.223401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110986' to capture a snapshot of events at runtime. 00:06:15.103 [2024-12-07 00:33:31.223430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.103 [2024-12-07 00:33:31.223441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.103 [2024-12-07 00:33:31.223451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110986 for offline analysis/debug. 
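[annotation] The app_setup_trace notices above spell out the tracing workflow for this spdk_tgt instance, which rpc.sh started with the bdev tracepoint group (-e bdev). A minimal sketch of that workflow follows; the build/bin path for spdk_trace is an assumption (the notice only names the bare command), and 110986 stands in for whatever pid the target reports.

    # Capture a snapshot of bdev tracepoints from the running target,
    # exactly as the notice suggests.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_trace -s spdk_tgt -p 110986

    # Or keep the raw trace shared-memory file for offline analysis/debug.
    cp /dev/shm/spdk_tgt_trace.pid110986 /tmp/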
00:06:15.103 [2024-12-07 00:33:31.224039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.362 00:33:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.362 00:33:31 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.362 00:33:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:15.362 00:33:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:15.362 00:33:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:15.362 00:33:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:15.362 00:33:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.362 00:33:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.362 00:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.362 ************************************ 00:06:15.362 START TEST rpc_integrity 00:06:15.362 ************************************ 00:06:15.362 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:15.362 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.362 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.362 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.621 { 00:06:15.621 "name": "Malloc0", 00:06:15.621 "aliases": [ 00:06:15.621 "344fa7fb-3382-43b0-80c4-0b24d4bdca1c" 00:06:15.621 ], 00:06:15.621 "product_name": "Malloc disk", 00:06:15.621 "block_size": 512, 00:06:15.621 "num_blocks": 16384, 00:06:15.621 "uuid": "344fa7fb-3382-43b0-80c4-0b24d4bdca1c", 00:06:15.621 "assigned_rate_limits": { 00:06:15.621 "rw_ios_per_sec": 0, 00:06:15.621 "rw_mbytes_per_sec": 0, 00:06:15.621 "r_mbytes_per_sec": 0, 00:06:15.621 "w_mbytes_per_sec": 0 00:06:15.621 }, 
00:06:15.621 "claimed": false, 00:06:15.621 "zoned": false, 00:06:15.621 "supported_io_types": { 00:06:15.621 "read": true, 00:06:15.621 "write": true, 00:06:15.621 "unmap": true, 00:06:15.621 "flush": true, 00:06:15.621 "reset": true, 00:06:15.621 "nvme_admin": false, 00:06:15.621 "nvme_io": false, 00:06:15.621 "nvme_io_md": false, 00:06:15.621 "write_zeroes": true, 00:06:15.621 "zcopy": true, 00:06:15.621 "get_zone_info": false, 00:06:15.621 "zone_management": false, 00:06:15.621 "zone_append": false, 00:06:15.621 "compare": false, 00:06:15.621 "compare_and_write": false, 00:06:15.621 "abort": true, 00:06:15.621 "seek_hole": false, 00:06:15.621 "seek_data": false, 00:06:15.621 "copy": true, 00:06:15.621 "nvme_iov_md": false 00:06:15.621 }, 00:06:15.621 "memory_domains": [ 00:06:15.621 { 00:06:15.621 "dma_device_id": "system", 00:06:15.621 "dma_device_type": 1 00:06:15.621 }, 00:06:15.621 { 00:06:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.621 "dma_device_type": 2 00:06:15.621 } 00:06:15.621 ], 00:06:15.621 "driver_specific": {} 00:06:15.621 } 00:06:15.621 ]' 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.621 [2024-12-07 00:33:31.614053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:15.621 [2024-12-07 00:33:31.614102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.621 [2024-12-07 00:33:31.614141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1223520 00:06:15.621 [2024-12-07 00:33:31.614186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.621 [2024-12-07 00:33:31.615584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.621 [2024-12-07 00:33:31.615607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.621 Passthru0 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.621 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.621 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.621 { 00:06:15.621 "name": "Malloc0", 00:06:15.621 "aliases": [ 00:06:15.621 "344fa7fb-3382-43b0-80c4-0b24d4bdca1c" 00:06:15.621 ], 00:06:15.621 "product_name": "Malloc disk", 00:06:15.621 "block_size": 512, 00:06:15.621 "num_blocks": 16384, 00:06:15.621 "uuid": "344fa7fb-3382-43b0-80c4-0b24d4bdca1c", 00:06:15.621 "assigned_rate_limits": { 00:06:15.621 "rw_ios_per_sec": 0, 00:06:15.621 "rw_mbytes_per_sec": 0, 00:06:15.621 "r_mbytes_per_sec": 0, 00:06:15.621 "w_mbytes_per_sec": 0 00:06:15.621 }, 00:06:15.621 "claimed": true, 00:06:15.621 "claim_type": "exclusive_write", 00:06:15.621 "zoned": false, 00:06:15.621 "supported_io_types": { 00:06:15.621 "read": true, 00:06:15.621 "write": true, 00:06:15.621 "unmap": true, 00:06:15.621 "flush": 
true, 00:06:15.621 "reset": true, 00:06:15.621 "nvme_admin": false, 00:06:15.621 "nvme_io": false, 00:06:15.621 "nvme_io_md": false, 00:06:15.621 "write_zeroes": true, 00:06:15.621 "zcopy": true, 00:06:15.622 "get_zone_info": false, 00:06:15.622 "zone_management": false, 00:06:15.622 "zone_append": false, 00:06:15.622 "compare": false, 00:06:15.622 "compare_and_write": false, 00:06:15.622 "abort": true, 00:06:15.622 "seek_hole": false, 00:06:15.622 "seek_data": false, 00:06:15.622 "copy": true, 00:06:15.622 "nvme_iov_md": false 00:06:15.622 }, 00:06:15.622 "memory_domains": [ 00:06:15.622 { 00:06:15.622 "dma_device_id": "system", 00:06:15.622 "dma_device_type": 1 00:06:15.622 }, 00:06:15.622 { 00:06:15.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.622 "dma_device_type": 2 00:06:15.622 } 00:06:15.622 ], 00:06:15.622 "driver_specific": {} 00:06:15.622 }, 00:06:15.622 { 00:06:15.622 "name": "Passthru0", 00:06:15.622 "aliases": [ 00:06:15.622 "60b281bd-1d4c-5a87-96bf-78e3d81dee1f" 00:06:15.622 ], 00:06:15.622 "product_name": "passthru", 00:06:15.622 "block_size": 512, 00:06:15.622 "num_blocks": 16384, 00:06:15.622 "uuid": "60b281bd-1d4c-5a87-96bf-78e3d81dee1f", 00:06:15.622 "assigned_rate_limits": { 00:06:15.622 "rw_ios_per_sec": 0, 00:06:15.622 "rw_mbytes_per_sec": 0, 00:06:15.622 "r_mbytes_per_sec": 0, 00:06:15.622 "w_mbytes_per_sec": 0 00:06:15.622 }, 00:06:15.622 "claimed": false, 00:06:15.622 "zoned": false, 00:06:15.622 "supported_io_types": { 00:06:15.622 "read": true, 00:06:15.622 "write": true, 00:06:15.622 "unmap": true, 00:06:15.622 "flush": true, 00:06:15.622 "reset": true, 00:06:15.622 "nvme_admin": false, 00:06:15.622 "nvme_io": false, 00:06:15.622 "nvme_io_md": false, 00:06:15.622 "write_zeroes": true, 00:06:15.622 "zcopy": true, 00:06:15.622 "get_zone_info": false, 00:06:15.622 "zone_management": false, 00:06:15.622 "zone_append": false, 00:06:15.622 "compare": false, 00:06:15.622 "compare_and_write": false, 00:06:15.622 "abort": true, 00:06:15.622 "seek_hole": false, 00:06:15.622 "seek_data": false, 00:06:15.622 "copy": true, 00:06:15.622 "nvme_iov_md": false 00:06:15.622 }, 00:06:15.622 "memory_domains": [ 00:06:15.622 { 00:06:15.622 "dma_device_id": "system", 00:06:15.622 "dma_device_type": 1 00:06:15.622 }, 00:06:15.622 { 00:06:15.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.622 "dma_device_type": 2 00:06:15.622 } 00:06:15.622 ], 00:06:15.622 "driver_specific": { 00:06:15.622 "passthru": { 00:06:15.622 "name": "Passthru0", 00:06:15.622 "base_bdev_name": "Malloc0" 00:06:15.622 } 00:06:15.622 } 00:06:15.622 } 00:06:15.622 ]' 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.622 00:33:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.622 00:06:15.622 real 0m0.225s 00:06:15.622 user 0m0.138s 00:06:15.622 sys 0m0.027s 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.622 00:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.622 ************************************ 00:06:15.622 END TEST rpc_integrity 00:06:15.622 ************************************ 00:06:15.622 00:33:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:15.622 00:33:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.622 00:33:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.622 00:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 ************************************ 00:06:15.881 START TEST rpc_plugins 00:06:15.881 ************************************ 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:15.881 { 00:06:15.881 "name": "Malloc1", 00:06:15.881 "aliases": [ 00:06:15.881 "88125d95-3a7b-4e35-a042-d3667197dd3e" 00:06:15.881 ], 00:06:15.881 "product_name": "Malloc disk", 00:06:15.881 "block_size": 4096, 00:06:15.881 "num_blocks": 256, 00:06:15.881 "uuid": "88125d95-3a7b-4e35-a042-d3667197dd3e", 00:06:15.881 "assigned_rate_limits": { 00:06:15.881 "rw_ios_per_sec": 0, 00:06:15.881 "rw_mbytes_per_sec": 0, 00:06:15.881 "r_mbytes_per_sec": 0, 00:06:15.881 "w_mbytes_per_sec": 0 00:06:15.881 }, 00:06:15.881 "claimed": false, 00:06:15.881 "zoned": false, 00:06:15.881 "supported_io_types": { 00:06:15.881 "read": true, 00:06:15.881 "write": true, 00:06:15.881 "unmap": true, 00:06:15.881 "flush": true, 00:06:15.881 "reset": true, 00:06:15.881 "nvme_admin": false, 00:06:15.881 "nvme_io": false, 00:06:15.881 "nvme_io_md": false, 00:06:15.881 "write_zeroes": true, 00:06:15.881 "zcopy": true, 00:06:15.881 "get_zone_info": false, 00:06:15.881 "zone_management": false, 00:06:15.881 "zone_append": false, 00:06:15.881 "compare": false, 00:06:15.881 "compare_and_write": false, 00:06:15.881 "abort": true, 00:06:15.881 "seek_hole": false, 00:06:15.881 "seek_data": false, 00:06:15.881 "copy": true, 00:06:15.881 "nvme_iov_md": false 
00:06:15.881 }, 00:06:15.881 "memory_domains": [ 00:06:15.881 { 00:06:15.881 "dma_device_id": "system", 00:06:15.881 "dma_device_type": 1 00:06:15.881 }, 00:06:15.881 { 00:06:15.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.881 "dma_device_type": 2 00:06:15.881 } 00:06:15.881 ], 00:06:15.881 "driver_specific": {} 00:06:15.881 } 00:06:15.881 ]' 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:15.881 00:33:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:15.881 00:06:15.881 real 0m0.106s 00:06:15.881 user 0m0.066s 00:06:15.881 sys 0m0.012s 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 ************************************ 00:06:15.881 END TEST rpc_plugins 00:06:15.881 ************************************ 00:06:15.881 00:33:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:15.881 00:33:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.881 00:33:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.881 00:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 ************************************ 00:06:15.881 START TEST rpc_trace_cmd_test 00:06:15.881 ************************************ 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.881 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:15.881 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110986", 00:06:15.881 "tpoint_group_mask": "0x8", 00:06:15.881 "iscsi_conn": { 00:06:15.881 "mask": "0x2", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "scsi": { 00:06:15.881 "mask": "0x4", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "bdev": { 00:06:15.881 "mask": "0x8", 00:06:15.881 "tpoint_mask": "0xffffffffffffffff" 00:06:15.881 }, 00:06:15.881 "nvmf_rdma": { 00:06:15.881 "mask": "0x10", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "nvmf_tcp": { 00:06:15.881 "mask": "0x20", 00:06:15.881 
"tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "ftl": { 00:06:15.881 "mask": "0x40", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "blobfs": { 00:06:15.881 "mask": "0x80", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "dsa": { 00:06:15.881 "mask": "0x200", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "thread": { 00:06:15.881 "mask": "0x400", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "nvme_pcie": { 00:06:15.881 "mask": "0x800", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "iaa": { 00:06:15.881 "mask": "0x1000", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "nvme_tcp": { 00:06:15.881 "mask": "0x2000", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.881 }, 00:06:15.881 "bdev_nvme": { 00:06:15.881 "mask": "0x4000", 00:06:15.881 "tpoint_mask": "0x0" 00:06:15.882 }, 00:06:15.882 "sock": { 00:06:15.882 "mask": "0x8000", 00:06:15.882 "tpoint_mask": "0x0" 00:06:15.882 }, 00:06:15.882 "blob": { 00:06:15.882 "mask": "0x10000", 00:06:15.882 "tpoint_mask": "0x0" 00:06:15.882 }, 00:06:15.882 "bdev_raid": { 00:06:15.882 "mask": "0x20000", 00:06:15.882 "tpoint_mask": "0x0" 00:06:15.882 }, 00:06:15.882 "scheduler": { 00:06:15.882 "mask": "0x40000", 00:06:15.882 "tpoint_mask": "0x0" 00:06:15.882 } 00:06:15.882 }' 00:06:15.882 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:15.882 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:15.882 00:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:15.882 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:15.882 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:16.141 00:06:16.141 real 0m0.182s 00:06:16.141 user 0m0.163s 00:06:16.141 sys 0m0.012s 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.141 00:33:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 ************************************ 00:06:16.141 END TEST rpc_trace_cmd_test 00:06:16.141 ************************************ 00:06:16.141 00:33:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:16.141 00:33:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:16.141 00:33:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:16.141 00:33:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.141 00:33:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.141 00:33:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 ************************************ 00:06:16.141 START TEST rpc_daemon_integrity 00:06:16.141 ************************************ 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.141 00:33:32 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:16.141 { 00:06:16.141 "name": "Malloc2", 00:06:16.141 "aliases": [ 00:06:16.141 "86cacd49-c868-4e90-aded-b81b4cf075d3" 00:06:16.141 ], 00:06:16.141 "product_name": "Malloc disk", 00:06:16.141 "block_size": 512, 00:06:16.141 "num_blocks": 16384, 00:06:16.141 "uuid": "86cacd49-c868-4e90-aded-b81b4cf075d3", 00:06:16.141 "assigned_rate_limits": { 00:06:16.141 "rw_ios_per_sec": 0, 00:06:16.141 "rw_mbytes_per_sec": 0, 00:06:16.141 "r_mbytes_per_sec": 0, 00:06:16.141 "w_mbytes_per_sec": 0 00:06:16.141 }, 00:06:16.141 "claimed": false, 00:06:16.141 "zoned": false, 00:06:16.141 "supported_io_types": { 00:06:16.141 "read": true, 00:06:16.141 "write": true, 00:06:16.141 "unmap": true, 00:06:16.141 "flush": true, 00:06:16.141 "reset": true, 00:06:16.141 "nvme_admin": false, 00:06:16.141 "nvme_io": false, 00:06:16.141 "nvme_io_md": false, 00:06:16.141 "write_zeroes": true, 00:06:16.141 "zcopy": true, 00:06:16.141 "get_zone_info": false, 00:06:16.141 "zone_management": false, 00:06:16.141 "zone_append": false, 00:06:16.141 "compare": false, 00:06:16.141 "compare_and_write": false, 00:06:16.141 "abort": true, 00:06:16.141 "seek_hole": false, 00:06:16.141 "seek_data": false, 00:06:16.141 "copy": true, 00:06:16.141 "nvme_iov_md": false 00:06:16.141 }, 00:06:16.141 "memory_domains": [ 00:06:16.141 { 00:06:16.141 "dma_device_id": "system", 00:06:16.141 "dma_device_type": 1 00:06:16.141 }, 00:06:16.141 { 00:06:16.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.141 "dma_device_type": 2 00:06:16.141 } 00:06:16.141 ], 00:06:16.141 "driver_specific": {} 00:06:16.141 } 00:06:16.141 ]' 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 [2024-12-07 00:33:32.264216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:16.141 
[2024-12-07 00:33:32.264256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.141 [2024-12-07 00:33:32.264299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1223a80 00:06:16.141 [2024-12-07 00:33:32.264313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.141 [2024-12-07 00:33:32.265467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.141 [2024-12-07 00:33:32.265489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:16.141 Passthru0 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.141 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:16.141 { 00:06:16.141 "name": "Malloc2", 00:06:16.141 "aliases": [ 00:06:16.141 "86cacd49-c868-4e90-aded-b81b4cf075d3" 00:06:16.141 ], 00:06:16.141 "product_name": "Malloc disk", 00:06:16.141 "block_size": 512, 00:06:16.141 "num_blocks": 16384, 00:06:16.141 "uuid": "86cacd49-c868-4e90-aded-b81b4cf075d3", 00:06:16.141 "assigned_rate_limits": { 00:06:16.141 "rw_ios_per_sec": 0, 00:06:16.141 "rw_mbytes_per_sec": 0, 00:06:16.141 "r_mbytes_per_sec": 0, 00:06:16.141 "w_mbytes_per_sec": 0 00:06:16.141 }, 00:06:16.141 "claimed": true, 00:06:16.141 "claim_type": "exclusive_write", 00:06:16.141 "zoned": false, 00:06:16.141 "supported_io_types": { 00:06:16.141 "read": true, 00:06:16.141 "write": true, 00:06:16.141 "unmap": true, 00:06:16.141 "flush": true, 00:06:16.141 "reset": true, 00:06:16.141 "nvme_admin": false, 00:06:16.141 "nvme_io": false, 00:06:16.141 "nvme_io_md": false, 00:06:16.141 "write_zeroes": true, 00:06:16.141 "zcopy": true, 00:06:16.141 "get_zone_info": false, 00:06:16.141 "zone_management": false, 00:06:16.141 "zone_append": false, 00:06:16.141 "compare": false, 00:06:16.141 "compare_and_write": false, 00:06:16.141 "abort": true, 00:06:16.141 "seek_hole": false, 00:06:16.141 "seek_data": false, 00:06:16.141 "copy": true, 00:06:16.141 "nvme_iov_md": false 00:06:16.141 }, 00:06:16.141 "memory_domains": [ 00:06:16.141 { 00:06:16.141 "dma_device_id": "system", 00:06:16.141 "dma_device_type": 1 00:06:16.141 }, 00:06:16.141 { 00:06:16.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.141 "dma_device_type": 2 00:06:16.141 } 00:06:16.141 ], 00:06:16.141 "driver_specific": {} 00:06:16.141 }, 00:06:16.141 { 00:06:16.141 "name": "Passthru0", 00:06:16.141 "aliases": [ 00:06:16.141 "ea26dbc2-a839-5d9f-b958-a01e477b4805" 00:06:16.141 ], 00:06:16.141 "product_name": "passthru", 00:06:16.141 "block_size": 512, 00:06:16.141 "num_blocks": 16384, 00:06:16.141 "uuid": "ea26dbc2-a839-5d9f-b958-a01e477b4805", 00:06:16.141 "assigned_rate_limits": { 00:06:16.141 "rw_ios_per_sec": 0, 00:06:16.141 "rw_mbytes_per_sec": 0, 00:06:16.141 "r_mbytes_per_sec": 0, 00:06:16.141 "w_mbytes_per_sec": 0 00:06:16.141 }, 00:06:16.141 "claimed": false, 00:06:16.141 "zoned": false, 00:06:16.141 "supported_io_types": { 00:06:16.141 "read": true, 00:06:16.141 "write": true, 00:06:16.141 "unmap": true, 00:06:16.141 "flush": true, 00:06:16.141 "reset": true, 
00:06:16.141 "nvme_admin": false, 00:06:16.141 "nvme_io": false, 00:06:16.141 "nvme_io_md": false, 00:06:16.141 "write_zeroes": true, 00:06:16.141 "zcopy": true, 00:06:16.141 "get_zone_info": false, 00:06:16.141 "zone_management": false, 00:06:16.141 "zone_append": false, 00:06:16.141 "compare": false, 00:06:16.141 "compare_and_write": false, 00:06:16.141 "abort": true, 00:06:16.141 "seek_hole": false, 00:06:16.141 "seek_data": false, 00:06:16.142 "copy": true, 00:06:16.142 "nvme_iov_md": false 00:06:16.142 }, 00:06:16.142 "memory_domains": [ 00:06:16.142 { 00:06:16.142 "dma_device_id": "system", 00:06:16.142 "dma_device_type": 1 00:06:16.142 }, 00:06:16.142 { 00:06:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.142 "dma_device_type": 2 00:06:16.142 } 00:06:16.142 ], 00:06:16.142 "driver_specific": { 00:06:16.142 "passthru": { 00:06:16.142 "name": "Passthru0", 00:06:16.142 "base_bdev_name": "Malloc2" 00:06:16.142 } 00:06:16.142 } 00:06:16.142 } 00:06:16.142 ]' 00:06:16.142 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:16.400 00:06:16.400 real 0m0.212s 00:06:16.400 user 0m0.139s 00:06:16.400 sys 0m0.018s 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.400 00:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.400 ************************************ 00:06:16.400 END TEST rpc_daemon_integrity 00:06:16.400 ************************************ 00:06:16.400 00:33:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:16.400 00:33:32 rpc -- rpc/rpc.sh@84 -- # killprocess 110986 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 110986 ']' 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@958 -- # kill -0 110986 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110986 
00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110986' 00:06:16.400 killing process with pid 110986 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@973 -- # kill 110986 00:06:16.400 00:33:32 rpc -- common/autotest_common.sh@978 -- # wait 110986 00:06:16.660 00:06:16.660 real 0m1.906s 00:06:16.660 user 0m2.361s 00:06:16.660 sys 0m0.622s 00:06:16.660 00:33:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.660 00:33:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.660 ************************************ 00:06:16.660 END TEST rpc 00:06:16.660 ************************************ 00:06:16.920 00:33:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.920 00:33:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.920 00:33:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.920 00:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:16.920 ************************************ 00:06:16.920 START TEST skip_rpc 00:06:16.920 ************************************ 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.920 * Looking for test storage... 00:06:16.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.920 00:33:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:16.920 00:33:32 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.920 --rc genhtml_branch_coverage=1 00:06:16.920 --rc genhtml_function_coverage=1 00:06:16.920 --rc genhtml_legend=1 00:06:16.920 --rc geninfo_all_blocks=1 00:06:16.920 --rc geninfo_unexecuted_blocks=1 00:06:16.920 00:06:16.920 ' 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.920 --rc genhtml_branch_coverage=1 00:06:16.920 --rc genhtml_function_coverage=1 00:06:16.920 --rc genhtml_legend=1 00:06:16.920 --rc geninfo_all_blocks=1 00:06:16.920 --rc geninfo_unexecuted_blocks=1 00:06:16.920 00:06:16.920 ' 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.920 --rc genhtml_branch_coverage=1 00:06:16.920 --rc genhtml_function_coverage=1 00:06:16.920 --rc genhtml_legend=1 00:06:16.920 --rc geninfo_all_blocks=1 00:06:16.920 --rc geninfo_unexecuted_blocks=1 00:06:16.920 00:06:16.920 ' 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.920 --rc genhtml_branch_coverage=1 00:06:16.920 --rc genhtml_function_coverage=1 00:06:16.920 --rc genhtml_legend=1 00:06:16.920 --rc geninfo_all_blocks=1 00:06:16.920 --rc geninfo_unexecuted_blocks=1 00:06:16.920 00:06:16.920 ' 00:06:16.920 00:33:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.920 00:33:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:16.920 00:33:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.920 00:33:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.920 ************************************ 00:06:16.920 START TEST skip_rpc 00:06:16.920 ************************************ 00:06:16.920 00:33:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:16.920 
00:33:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111342 00:06:16.920 00:33:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:16.920 00:33:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.920 00:33:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:17.181 [2024-12-07 00:33:33.084716] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:17.181 [2024-12-07 00:33:33.084781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111342 ] 00:06:17.181 [2024-12-07 00:33:33.152048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.181 [2024-12-07 00:33:33.199842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111342 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 111342 ']' 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 111342 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111342 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111342' 00:06:22.454 killing process with pid 111342 00:06:22.454 00:33:38 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 111342 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 111342 00:06:22.454 00:06:22.454 real 0m5.431s 00:06:22.454 user 0m5.134s 00:06:22.454 sys 0m0.312s 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.454 00:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 ************************************ 00:06:22.454 END TEST skip_rpc 00:06:22.454 ************************************ 00:06:22.454 00:33:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:22.454 00:33:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.454 00:33:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.454 00:33:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 ************************************ 00:06:22.454 START TEST skip_rpc_with_json 00:06:22.454 ************************************ 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112034 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112034 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 112034 ']' 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.454 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 [2024-12-07 00:33:38.564384] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:22.454 [2024-12-07 00:33:38.564478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112034 ] 00:06:22.714 [2024-12-07 00:33:38.633390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.714 [2024-12-07 00:33:38.682700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.973 [2024-12-07 00:33:38.948008] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:22.973 request: 00:06:22.973 { 00:06:22.973 "trtype": "tcp", 00:06:22.973 "method": "nvmf_get_transports", 00:06:22.973 "req_id": 1 00:06:22.973 } 00:06:22.973 Got JSON-RPC error response 00:06:22.973 response: 00:06:22.973 { 00:06:22.973 "code": -19, 00:06:22.973 "message": "No such device" 00:06:22.973 } 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.973 [2024-12-07 00:33:38.956144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.973 00:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.973 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.973 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.973 { 00:06:22.973 "subsystems": [ 00:06:22.973 { 00:06:22.973 "subsystem": "fsdev", 00:06:22.973 "config": [ 00:06:22.973 { 00:06:22.973 "method": "fsdev_set_opts", 00:06:22.973 "params": { 00:06:22.973 "fsdev_io_pool_size": 65535, 00:06:22.973 "fsdev_io_cache_size": 256 00:06:22.973 } 00:06:22.973 } 00:06:22.973 ] 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "subsystem": "vfio_user_target", 00:06:22.973 "config": null 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "subsystem": "keyring", 00:06:22.973 "config": [] 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "subsystem": "iobuf", 00:06:22.973 "config": [ 00:06:22.973 { 00:06:22.973 "method": "iobuf_set_options", 00:06:22.973 "params": { 00:06:22.973 "small_pool_count": 8192, 00:06:22.973 "large_pool_count": 1024, 00:06:22.973 "small_bufsize": 8192, 00:06:22.973 "large_bufsize": 135168, 00:06:22.973 "enable_numa": false 00:06:22.973 } 00:06:22.973 } 00:06:22.973 
] 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "subsystem": "sock", 00:06:22.973 "config": [ 00:06:22.973 { 00:06:22.973 "method": "sock_set_default_impl", 00:06:22.973 "params": { 00:06:22.973 "impl_name": "posix" 00:06:22.973 } 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "method": "sock_impl_set_options", 00:06:22.973 "params": { 00:06:22.973 "impl_name": "ssl", 00:06:22.973 "recv_buf_size": 4096, 00:06:22.973 "send_buf_size": 4096, 00:06:22.973 "enable_recv_pipe": true, 00:06:22.973 "enable_quickack": false, 00:06:22.973 "enable_placement_id": 0, 00:06:22.973 "enable_zerocopy_send_server": true, 00:06:22.973 "enable_zerocopy_send_client": false, 00:06:22.973 "zerocopy_threshold": 0, 00:06:22.973 "tls_version": 0, 00:06:22.973 "enable_ktls": false 00:06:22.973 } 00:06:22.973 }, 00:06:22.973 { 00:06:22.973 "method": "sock_impl_set_options", 00:06:22.973 "params": { 00:06:22.973 "impl_name": "posix", 00:06:22.973 "recv_buf_size": 2097152, 00:06:22.974 "send_buf_size": 2097152, 00:06:22.974 "enable_recv_pipe": true, 00:06:22.974 "enable_quickack": false, 00:06:22.974 "enable_placement_id": 0, 00:06:22.974 "enable_zerocopy_send_server": true, 00:06:22.974 "enable_zerocopy_send_client": false, 00:06:22.974 "zerocopy_threshold": 0, 00:06:22.974 "tls_version": 0, 00:06:22.974 "enable_ktls": false 00:06:22.974 } 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "vmd", 00:06:22.974 "config": [] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "accel", 00:06:22.974 "config": [ 00:06:22.974 { 00:06:22.974 "method": "accel_set_options", 00:06:22.974 "params": { 00:06:22.974 "small_cache_size": 128, 00:06:22.974 "large_cache_size": 16, 00:06:22.974 "task_count": 2048, 00:06:22.974 "sequence_count": 2048, 00:06:22.974 "buf_count": 2048 00:06:22.974 } 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "bdev", 00:06:22.974 "config": [ 00:06:22.974 { 00:06:22.974 "method": "bdev_set_options", 00:06:22.974 "params": { 00:06:22.974 "bdev_io_pool_size": 65535, 00:06:22.974 "bdev_io_cache_size": 256, 00:06:22.974 "bdev_auto_examine": true, 00:06:22.974 "iobuf_small_cache_size": 128, 00:06:22.974 "iobuf_large_cache_size": 16 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "bdev_raid_set_options", 00:06:22.974 "params": { 00:06:22.974 "process_window_size_kb": 1024, 00:06:22.974 "process_max_bandwidth_mb_sec": 0 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "bdev_iscsi_set_options", 00:06:22.974 "params": { 00:06:22.974 "timeout_sec": 30 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "bdev_nvme_set_options", 00:06:22.974 "params": { 00:06:22.974 "action_on_timeout": "none", 00:06:22.974 "timeout_us": 0, 00:06:22.974 "timeout_admin_us": 0, 00:06:22.974 "keep_alive_timeout_ms": 10000, 00:06:22.974 "arbitration_burst": 0, 00:06:22.974 "low_priority_weight": 0, 00:06:22.974 "medium_priority_weight": 0, 00:06:22.974 "high_priority_weight": 0, 00:06:22.974 "nvme_adminq_poll_period_us": 10000, 00:06:22.974 "nvme_ioq_poll_period_us": 0, 00:06:22.974 "io_queue_requests": 0, 00:06:22.974 "delay_cmd_submit": true, 00:06:22.974 "transport_retry_count": 4, 00:06:22.974 "bdev_retry_count": 3, 00:06:22.974 "transport_ack_timeout": 0, 00:06:22.974 "ctrlr_loss_timeout_sec": 0, 00:06:22.974 "reconnect_delay_sec": 0, 00:06:22.974 "fast_io_fail_timeout_sec": 0, 00:06:22.974 "disable_auto_failback": false, 00:06:22.974 "generate_uuids": false, 00:06:22.974 "transport_tos": 0, 
00:06:22.974 "nvme_error_stat": false, 00:06:22.974 "rdma_srq_size": 0, 00:06:22.974 "io_path_stat": false, 00:06:22.974 "allow_accel_sequence": false, 00:06:22.974 "rdma_max_cq_size": 0, 00:06:22.974 "rdma_cm_event_timeout_ms": 0, 00:06:22.974 "dhchap_digests": [ 00:06:22.974 "sha256", 00:06:22.974 "sha384", 00:06:22.974 "sha512" 00:06:22.974 ], 00:06:22.974 "dhchap_dhgroups": [ 00:06:22.974 "null", 00:06:22.974 "ffdhe2048", 00:06:22.974 "ffdhe3072", 00:06:22.974 "ffdhe4096", 00:06:22.974 "ffdhe6144", 00:06:22.974 "ffdhe8192" 00:06:22.974 ] 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "bdev_nvme_set_hotplug", 00:06:22.974 "params": { 00:06:22.974 "period_us": 100000, 00:06:22.974 "enable": false 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "bdev_wait_for_examine" 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "scsi", 00:06:22.974 "config": null 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "scheduler", 00:06:22.974 "config": [ 00:06:22.974 { 00:06:22.974 "method": "framework_set_scheduler", 00:06:22.974 "params": { 00:06:22.974 "name": "static" 00:06:22.974 } 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "vhost_scsi", 00:06:22.974 "config": [] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "vhost_blk", 00:06:22.974 "config": [] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "ublk", 00:06:22.974 "config": [] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "nbd", 00:06:22.974 "config": [] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "nvmf", 00:06:22.974 "config": [ 00:06:22.974 { 00:06:22.974 "method": "nvmf_set_config", 00:06:22.974 "params": { 00:06:22.974 "discovery_filter": "match_any", 00:06:22.974 "admin_cmd_passthru": { 00:06:22.974 "identify_ctrlr": false 00:06:22.974 }, 00:06:22.974 "dhchap_digests": [ 00:06:22.974 "sha256", 00:06:22.974 "sha384", 00:06:22.974 "sha512" 00:06:22.974 ], 00:06:22.974 "dhchap_dhgroups": [ 00:06:22.974 "null", 00:06:22.974 "ffdhe2048", 00:06:22.974 "ffdhe3072", 00:06:22.974 "ffdhe4096", 00:06:22.974 "ffdhe6144", 00:06:22.974 "ffdhe8192" 00:06:22.974 ] 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "nvmf_set_max_subsystems", 00:06:22.974 "params": { 00:06:22.974 "max_subsystems": 1024 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "nvmf_set_crdt", 00:06:22.974 "params": { 00:06:22.974 "crdt1": 0, 00:06:22.974 "crdt2": 0, 00:06:22.974 "crdt3": 0 00:06:22.974 } 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "method": "nvmf_create_transport", 00:06:22.974 "params": { 00:06:22.974 "trtype": "TCP", 00:06:22.974 "max_queue_depth": 128, 00:06:22.974 "max_io_qpairs_per_ctrlr": 127, 00:06:22.974 "in_capsule_data_size": 4096, 00:06:22.974 "max_io_size": 131072, 00:06:22.974 "io_unit_size": 131072, 00:06:22.974 "max_aq_depth": 128, 00:06:22.974 "num_shared_buffers": 511, 00:06:22.974 "buf_cache_size": 4294967295, 00:06:22.974 "dif_insert_or_strip": false, 00:06:22.974 "zcopy": false, 00:06:22.974 "c2h_success": true, 00:06:22.974 "sock_priority": 0, 00:06:22.974 "abort_timeout_sec": 1, 00:06:22.974 "ack_timeout": 0, 00:06:22.974 "data_wr_pool_size": 0 00:06:22.974 } 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 }, 00:06:22.974 { 00:06:22.974 "subsystem": "iscsi", 00:06:22.974 "config": [ 00:06:22.974 { 00:06:22.974 "method": "iscsi_set_options", 00:06:22.974 "params": { 00:06:22.974 "node_base": "iqn.2016-06.io.spdk", 00:06:22.974 "max_sessions": 
128, 00:06:22.974 "max_connections_per_session": 2, 00:06:22.974 "max_queue_depth": 64, 00:06:22.974 "default_time2wait": 2, 00:06:22.974 "default_time2retain": 20, 00:06:22.974 "first_burst_length": 8192, 00:06:22.974 "immediate_data": true, 00:06:22.974 "allow_duplicated_isid": false, 00:06:22.974 "error_recovery_level": 0, 00:06:22.974 "nop_timeout": 60, 00:06:22.974 "nop_in_interval": 30, 00:06:22.974 "disable_chap": false, 00:06:22.974 "require_chap": false, 00:06:22.974 "mutual_chap": false, 00:06:22.974 "chap_group": 0, 00:06:22.974 "max_large_datain_per_connection": 64, 00:06:22.974 "max_r2t_per_connection": 4, 00:06:22.974 "pdu_pool_size": 36864, 00:06:22.974 "immediate_data_pool_size": 16384, 00:06:22.974 "data_out_pool_size": 2048 00:06:22.974 } 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 } 00:06:22.974 ] 00:06:22.974 } 00:06:22.974 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:22.974 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112034 00:06:22.974 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 112034 ']' 00:06:22.974 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 112034 00:06:22.974 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112034 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112034' 00:06:23.234 killing process with pid 112034 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 112034 00:06:23.234 00:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 112034 00:06:23.492 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112172 00:06:23.492 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.492 00:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112172 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 112172 ']' 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 112172 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112172 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 112172' 00:06:28.757 killing process with pid 112172 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 112172 00:06:28.757 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 112172 00:06:29.015 00:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:29.015 00:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:29.015 00:06:29.015 real 0m6.473s 00:06:29.015 user 0m6.122s 00:06:29.015 sys 0m0.694s 00:06:29.015 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.015 00:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 ************************************ 00:06:29.015 END TEST skip_rpc_with_json 00:06:29.015 ************************************ 00:06:29.015 00:33:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 ************************************ 00:06:29.015 START TEST skip_rpc_with_delay 00:06:29.015 ************************************ 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:29.015 [2024-12-07 
00:33:45.092469] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.015 00:06:29.015 real 0m0.074s 00:06:29.015 user 0m0.047s 00:06:29.015 sys 0m0.026s 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.015 00:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 ************************************ 00:06:29.015 END TEST skip_rpc_with_delay 00:06:29.015 ************************************ 00:06:29.015 00:33:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:29.015 00:33:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:29.015 00:33:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.015 00:33:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 ************************************ 00:06:29.015 START TEST exit_on_failed_rpc_init 00:06:29.015 ************************************ 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=112882 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 112882 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 112882 ']' 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.015 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.273 [2024-12-07 00:33:45.215255] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
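The configuration dump above is the JSON emitted by the target's save_config RPC, and skip_rpc_with_json validates it by replaying it with the RPC server disabled and checking that the TCP transport still comes up. A minimal sketch of that replay, assuming the same workspace paths as this run and that the target's output is captured to the log file the test greps:

  # Replay a saved configuration without an RPC server, then confirm from the
  # application output that the TCP transport initialized.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
  "$spdk_tgt" --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
  pid=$!
  sleep 5
  kill "$pid"; wait "$pid" || true
  grep -q 'TCP Transport Init' "$log"   # non-zero exit status here fails the test

skip_rpc_with_delay, also above, asserts the complementary negative case: '--wait-for-rpc' cannot be combined with '--no-rpc-server', which is the app.c:842 error recorded in the log.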
00:06:29.273 [2024-12-07 00:33:45.215393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112882 ] 00:06:29.273 [2024-12-07 00:33:45.282322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.273 [2024-12-07 00:33:45.329192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:29.531 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.531 [2024-12-07 00:33:45.639291] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:29.531 [2024-12-07 00:33:45.639367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112904 ] 00:06:29.789 [2024-12-07 00:33:45.705221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.789 [2024-12-07 00:33:45.751512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.789 [2024-12-07 00:33:45.751628] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:29.789 [2024-12-07 00:33:45.751648] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:29.789 [2024-12-07 00:33:45.751659] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 112882 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 112882 ']' 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 112882 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112882 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112882' 00:06:29.789 killing process with pid 112882 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 112882 00:06:29.789 00:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 112882 00:06:30.355 00:06:30.355 real 0m1.077s 00:06:30.355 user 0m1.151s 00:06:30.355 sys 0m0.431s 00:06:30.355 00:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.355 00:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:30.355 ************************************ 00:06:30.355 END TEST exit_on_failed_rpc_init 00:06:30.355 ************************************ 00:06:30.355 00:33:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.355 00:06:30.355 real 0m13.408s 00:06:30.355 user 0m12.623s 00:06:30.355 sys 0m1.668s 00:06:30.355 00:33:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.355 00:33:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.355 ************************************ 00:06:30.355 END TEST skip_rpc 00:06:30.355 ************************************ 00:06:30.355 00:33:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.355 00:33:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.355 00:33:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.355 00:33:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.355 ************************************ 00:06:30.355 START TEST rpc_client 00:06:30.355 ************************************ 00:06:30.355 00:33:46 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.355 * Looking for test storage... 00:06:30.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:30.355 00:33:46 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.355 00:33:46 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.355 00:33:46 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.355 00:33:46 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:30.355 00:33:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.356 00:33:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.356 --rc genhtml_branch_coverage=1 00:06:30.356 --rc genhtml_function_coverage=1 00:06:30.356 --rc genhtml_legend=1 00:06:30.356 --rc geninfo_all_blocks=1 00:06:30.356 --rc geninfo_unexecuted_blocks=1 00:06:30.356 00:06:30.356 ' 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.356 --rc genhtml_branch_coverage=1 00:06:30.356 --rc genhtml_function_coverage=1 00:06:30.356 --rc genhtml_legend=1 00:06:30.356 --rc geninfo_all_blocks=1 00:06:30.356 --rc geninfo_unexecuted_blocks=1 00:06:30.356 00:06:30.356 ' 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.356 --rc genhtml_branch_coverage=1 00:06:30.356 --rc genhtml_function_coverage=1 00:06:30.356 --rc genhtml_legend=1 00:06:30.356 --rc geninfo_all_blocks=1 00:06:30.356 --rc geninfo_unexecuted_blocks=1 00:06:30.356 00:06:30.356 ' 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.356 --rc genhtml_branch_coverage=1 00:06:30.356 --rc genhtml_function_coverage=1 00:06:30.356 --rc genhtml_legend=1 00:06:30.356 --rc geninfo_all_blocks=1 00:06:30.356 --rc geninfo_unexecuted_blocks=1 00:06:30.356 00:06:30.356 ' 00:06:30.356 00:33:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:30.356 OK 00:06:30.356 00:33:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.356 00:06:30.356 real 0m0.158s 00:06:30.356 user 0m0.105s 00:06:30.356 sys 0m0.060s 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.356 00:33:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.356 ************************************ 00:06:30.356 END TEST rpc_client 00:06:30.356 ************************************ 00:06:30.356 00:33:46 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:06:30.356 00:33:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.356 00:33:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.356 00:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:30.615 ************************************ 00:06:30.615 START TEST json_config 00:06:30.615 ************************************ 00:06:30.615 00:33:46 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.615 00:33:46 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.615 00:33:46 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.615 00:33:46 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.615 00:33:46 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.615 00:33:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.615 00:33:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.615 00:33:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.615 00:33:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.615 00:33:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.615 00:33:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.615 00:33:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.615 00:33:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.615 00:33:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.615 00:33:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.615 00:33:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.615 00:33:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:30.615 00:33:46 json_config -- scripts/common.sh@345 -- # : 1 00:06:30.615 00:33:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.616 00:33:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.616 00:33:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:30.616 00:33:46 json_config -- scripts/common.sh@353 -- # local d=1 00:06:30.616 00:33:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.616 00:33:46 json_config -- scripts/common.sh@355 -- # echo 1 00:06:30.616 00:33:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.616 00:33:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:30.616 00:33:46 json_config -- scripts/common.sh@353 -- # local d=2 00:06:30.616 00:33:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.616 00:33:46 json_config -- scripts/common.sh@355 -- # echo 2 00:06:30.616 00:33:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.616 00:33:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.616 00:33:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.616 00:33:46 json_config -- scripts/common.sh@368 -- # return 0 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.616 --rc genhtml_branch_coverage=1 00:06:30.616 --rc genhtml_function_coverage=1 00:06:30.616 --rc genhtml_legend=1 00:06:30.616 --rc geninfo_all_blocks=1 00:06:30.616 --rc geninfo_unexecuted_blocks=1 00:06:30.616 00:06:30.616 ' 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.616 --rc genhtml_branch_coverage=1 00:06:30.616 --rc genhtml_function_coverage=1 00:06:30.616 --rc genhtml_legend=1 00:06:30.616 --rc geninfo_all_blocks=1 00:06:30.616 --rc geninfo_unexecuted_blocks=1 00:06:30.616 00:06:30.616 ' 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.616 --rc genhtml_branch_coverage=1 00:06:30.616 --rc genhtml_function_coverage=1 00:06:30.616 --rc genhtml_legend=1 00:06:30.616 --rc geninfo_all_blocks=1 00:06:30.616 --rc geninfo_unexecuted_blocks=1 00:06:30.616 00:06:30.616 ' 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.616 --rc genhtml_branch_coverage=1 00:06:30.616 --rc genhtml_function_coverage=1 00:06:30.616 --rc genhtml_legend=1 00:06:30.616 --rc geninfo_all_blocks=1 00:06:30.616 --rc geninfo_unexecuted_blocks=1 00:06:30.616 00:06:30.616 ' 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:30.616 00:33:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.616 00:33:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.616 00:33:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.616 00:33:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.616 00:33:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.616 00:33:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.616 00:33:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.616 00:33:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.616 00:33:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.616 00:33:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@51 -- # : 0 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:30.616 00:33:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.616 00:33:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:30.616 INFO: JSON configuration test init 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.616 00:33:46 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:30.616 00:33:46 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:30.616 00:33:46 json_config -- json_config/common.sh@10 -- # shift 00:06:30.616 00:33:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.616 00:33:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.616 00:33:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.616 00:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.616 00:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.616 00:33:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=113164 00:06:30.616 00:33:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:30.616 00:33:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.616 Waiting for target to run... 00:06:30.616 00:33:46 json_config -- json_config/common.sh@25 -- # waitforlisten 113164 /var/tmp/spdk_tgt.sock 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 113164 ']' 00:06:30.616 00:33:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.617 00:33:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.617 00:33:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.617 00:33:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.617 00:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.617 [2024-12-07 00:33:46.740563] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
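At this point json_config_test_start_app brings the target up on a dedicated RPC socket, with a 1024 MB memory reservation and subsystem initialization deferred, and waitforlisten blocks until that socket answers. A rough equivalent, with the polling loop standing in only as an illustration of what the real waitforlisten helper in test/common/autotest_common.sh does:

  # Start the target on its own RPC socket and wait until RPC is reachable.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  pid=$!
  # Illustrative poll only; the real helper adds retries, timeouts and cleanup.
  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done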
00:06:30.617 [2024-12-07 00:33:46.740660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113164 ] 00:06:31.187 [2024-12-07 00:33:47.262271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.187 [2024-12-07 00:33:47.303423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:31.769 00:33:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.769 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.769 00:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:31.769 00:33:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:31.769 00:33:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:35.058 00:33:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.058 00:33:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:35.058 00:33:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:35.058 00:33:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:35.058 00:33:51 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@54 -- # sort 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:35.058 00:33:51 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:35.058 00:33:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.058 00:33:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:35.318 00:33:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.318 00:33:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:35.318 00:33:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:35.318 00:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:35.576 MallocForNvmf0 00:06:35.576 00:33:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:35.576 00:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:35.834 MallocForNvmf1 00:06:35.834 00:33:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:35.834 00:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:36.093 [2024-12-07 00:33:51.994349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.093 00:33:52 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:36.093 00:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:36.352 00:33:52 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:36.352 00:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:36.611 00:33:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:36.611 00:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:36.869 00:33:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:36.869 00:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:37.128 [2024-12-07 00:33:53.045668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:37.128 00:33:53 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:37.128 00:33:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.128 00:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.128 00:33:53 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:37.128 00:33:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.128 00:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.128 00:33:53 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:37.128 00:33:53 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:37.128 00:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:37.387 MallocBdevForConfigChangeCheck 00:06:37.387 00:33:53 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:37.387 00:33:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.387 00:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.387 00:33:53 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:37.387 00:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.953 00:33:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:37.953 INFO: shutting down applications... 
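The RPC sequence above is the entire NVMe-oF/TCP setup for this test: two malloc bdevs, a TCP transport, one subsystem carrying both namespaces, and a listener on 127.0.0.1:4420. Issued by hand against the same socket it would look like the following (commands taken from the run above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  # Backing bdevs (size in MB, block size in bytes)
  "$rpc" -s "$sock" bdev_malloc_create 8 512  --name MallocForNvmf0
  "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, subsystem, namespaces, listener
  "$rpc" -s "$sock" nvmf_create_transport -t tcp -u 8192 -c 0
  "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420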
00:06:37.953 00:33:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:37.953 00:33:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:37.953 00:33:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:37.953 00:33:53 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:39.328 Calling clear_iscsi_subsystem 00:06:39.328 Calling clear_nvmf_subsystem 00:06:39.328 Calling clear_nbd_subsystem 00:06:39.328 Calling clear_ublk_subsystem 00:06:39.328 Calling clear_vhost_blk_subsystem 00:06:39.328 Calling clear_vhost_scsi_subsystem 00:06:39.328 Calling clear_bdev_subsystem 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:39.328 00:33:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:39.893 00:33:55 json_config -- json_config/json_config.sh@352 -- # break 00:06:39.893 00:33:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:39.893 00:33:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:39.893 00:33:55 json_config -- json_config/common.sh@31 -- # local app=target 00:06:39.893 00:33:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:39.893 00:33:55 json_config -- json_config/common.sh@35 -- # [[ -n 113164 ]] 00:06:39.893 00:33:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 113164 00:06:39.893 00:33:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:39.893 00:33:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.893 00:33:55 json_config -- json_config/common.sh@41 -- # kill -0 113164 00:06:39.893 00:33:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:40.461 00:33:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:40.461 00:33:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.461 00:33:56 json_config -- json_config/common.sh@41 -- # kill -0 113164 00:06:40.461 00:33:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:40.461 00:33:56 json_config -- json_config/common.sh@43 -- # break 00:06:40.461 00:33:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:40.461 00:33:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:40.461 SPDK target shutdown done 00:06:40.461 00:33:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:40.461 INFO: relaunching applications... 
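Teardown in json_config_clear and json_config_test_shutdown_app is the mirror image: every subsystem is cleared through clear_config.py, then the target gets SIGINT and is polled for up to 15 seconds until it exits. A condensed sketch, with the pid lookup included only so the snippet stands alone:

  clear=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
  "$clear" -s /var/tmp/spdk_tgt.sock clear_config
  pid=$(pgrep -f 'spdk_tgt .*-r /var/tmp/spdk_tgt.sock' | head -n1)
  kill -SIGINT "$pid"
  for _ in $(seq 30); do                    # 30 x 0.5 s, as in json_config/common.sh
      kill -0 "$pid" 2>/dev/null || break   # process gone, shutdown complete
      sleep 0.5
  done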
00:06:40.461 00:33:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:40.461 00:33:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:40.461 00:33:56 json_config -- json_config/common.sh@10 -- # shift 00:06:40.461 00:33:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.461 00:33:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.461 00:33:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.461 00:33:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.461 00:33:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.461 00:33:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=114475 00:06:40.461 00:33:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:40.461 00:33:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.461 Waiting for target to run... 00:06:40.461 00:33:56 json_config -- json_config/common.sh@25 -- # waitforlisten 114475 /var/tmp/spdk_tgt.sock 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 114475 ']' 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.461 00:33:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.461 [2024-12-07 00:33:56.438329] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:40.461 [2024-12-07 00:33:56.438408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114475 ] 00:06:41.033 [2024-12-07 00:33:56.929336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.033 [2024-12-07 00:33:56.970004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.322 [2024-12-07 00:34:00.016506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.322 [2024-12-07 00:34:00.048954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:44.322 00:34:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.322 00:34:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:44.322 00:34:00 json_config -- json_config/common.sh@26 -- # echo '' 00:06:44.322 00:06:44.322 00:34:00 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:44.322 00:34:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:44.322 INFO: Checking if target configuration is the same... 
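After the relaunch from the saved JSON, the "same configuration" check below reduces to sorting both sides with config_filter.py and diffing them; an empty diff means the file and the live configuration agree. A condensed form of what json_diff.sh does with temporary files, under the assumption that the relaunched target is still serving /var/tmp/spdk_tgt.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
  # Left side: the JSON the target was relaunched with; right side: what it
  # reports now via save_config. Sorting both sides keeps ordering differences
  # from producing a spurious diff.
  diff -u <("$filter" -method sort < "$cfg") \
          <("$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort)

The second half of the test then deletes MallocBdevForConfigChangeCheck and requires the same comparison to produce a non-empty diff, which is the ret=1 path further down.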
00:06:44.322 00:34:00 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.322 00:34:00 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:44.322 00:34:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.322 + '[' 2 -ne 2 ']' 00:06:44.322 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:44.322 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:44.322 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:44.322 +++ basename /dev/fd/62 00:06:44.322 ++ mktemp /tmp/62.XXX 00:06:44.322 + tmp_file_1=/tmp/62.Yyq 00:06:44.322 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.322 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.322 + tmp_file_2=/tmp/spdk_tgt_config.json.1tt 00:06:44.322 + ret=0 00:06:44.322 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:44.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:44.580 + diff -u /tmp/62.Yyq /tmp/spdk_tgt_config.json.1tt 00:06:44.580 + echo 'INFO: JSON config files are the same' 00:06:44.580 INFO: JSON config files are the same 00:06:44.580 + rm /tmp/62.Yyq /tmp/spdk_tgt_config.json.1tt 00:06:44.580 + exit 0 00:06:44.580 00:34:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:44.580 00:34:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:44.580 INFO: changing configuration and checking if this can be detected... 00:06:44.580 00:34:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.580 00:34:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.838 00:34:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.838 00:34:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:44.838 00:34:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.838 + '[' 2 -ne 2 ']' 00:06:44.838 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:44.838 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:44.838 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:44.838 +++ basename /dev/fd/62 00:06:44.838 ++ mktemp /tmp/62.XXX 00:06:44.838 + tmp_file_1=/tmp/62.hKL 00:06:44.838 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.838 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.838 + tmp_file_2=/tmp/spdk_tgt_config.json.Uia 00:06:44.838 + ret=0 00:06:44.838 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.097 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.356 + diff -u /tmp/62.hKL /tmp/spdk_tgt_config.json.Uia 00:06:45.356 + ret=1 00:06:45.356 + echo '=== Start of file: /tmp/62.hKL ===' 00:06:45.356 + cat /tmp/62.hKL 00:06:45.356 + echo '=== End of file: /tmp/62.hKL ===' 00:06:45.356 + echo '' 00:06:45.356 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Uia ===' 00:06:45.356 + cat /tmp/spdk_tgt_config.json.Uia 00:06:45.356 + echo '=== End of file: /tmp/spdk_tgt_config.json.Uia ===' 00:06:45.356 + echo '' 00:06:45.356 + rm /tmp/62.hKL /tmp/spdk_tgt_config.json.Uia 00:06:45.356 + exit 1 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:45.356 INFO: configuration change detected. 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 114475 ]] 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.356 00:34:01 json_config -- json_config/json_config.sh@330 -- # killprocess 114475 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@954 -- # '[' -z 114475 ']' 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@958 -- # kill -0 114475 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@959 -- # uname 00:06:45.356 00:34:01 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.356 00:34:01 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114475 00:06:45.357 00:34:01 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.357 00:34:01 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.357 00:34:01 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114475' 00:06:45.357 killing process with pid 114475 00:06:45.357 00:34:01 json_config -- common/autotest_common.sh@973 -- # kill 114475 00:06:45.357 00:34:01 json_config -- common/autotest_common.sh@978 -- # wait 114475 00:06:47.256 00:34:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.256 00:34:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:47.256 00:34:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.256 00:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.256 00:34:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:47.256 00:34:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:47.257 INFO: Success 00:06:47.257 00:06:47.257 real 0m16.446s 00:06:47.257 user 0m18.394s 00:06:47.257 sys 0m2.191s 00:06:47.257 00:34:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.257 00:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 ************************************ 00:06:47.257 END TEST json_config 00:06:47.257 ************************************ 00:06:47.257 00:34:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:47.257 00:34:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.257 00:34:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.257 00:34:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 ************************************ 00:06:47.257 START TEST json_config_extra_key 00:06:47.257 ************************************ 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.257 00:34:03 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 00:34:03 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 00:34:03 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.257 00:34:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.257 00:34:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.257 00:34:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.257 00:34:03 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.257 00:34:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:47.257 00:34:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.257 00:34:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.257 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:47.257 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:47.257 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:47.257 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:47.257 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:47.258 INFO: launching applications... 
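Condensed sketch of the launch step traced next (json_config_test_start_app from the json_config/common.sh sourced above); the binary path, flags, socket and helper names are taken from the trace below, while the backgrounding with & and the $! capture are assumptions about how the helper records the PID:
  app=target
  # start the SPDK target with the extra_key.json config on its own RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
  app_pid[$app]=$!          # 115397 in this run
  # waitforlisten (defined in the sourced common.sh) blocks until the target answers RPCs on the UNIX socket
  waitforlisten "${app_pid[$app]}" /var/tmp/spdk_tgt.sock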
00:06:47.258 00:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=115397 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.258 Waiting for target to run... 00:06:47.258 00:34:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 115397 /var/tmp/spdk_tgt.sock 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 115397 ']' 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:47.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.258 00:34:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.258 [2024-12-07 00:34:03.210778] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:47.258 [2024-12-07 00:34:03.210853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115397 ] 00:06:47.828 [2024-12-07 00:34:03.720119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.828 [2024-12-07 00:34:03.761150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.086 00:34:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.086 00:34:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:48.086 00:06:48.086 00:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:48.086 INFO: shutting down applications... 
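The graceful shutdown traced next (json_config_test_shutdown_app) boils down to the following pattern; this is a sketch, with the SIGINT, the kill -0 liveness check and the 30 x 0.5 s polling loop read off the trace below:
  # ask the target to exit cleanly, then poll until the PID is gone
  kill -SIGINT "${app_pid[$app]}"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "${app_pid[$app]}" || break   # process gone -> stop polling
      sleep 0.5
  done
  echo 'SPDK target shutdown done'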
00:06:48.086 00:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 115397 ]] 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 115397 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115397 00:06:48.086 00:34:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115397 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:48.652 00:34:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:48.652 SPDK target shutdown done 00:06:48.652 00:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:48.652 Success 00:06:48.652 00:06:48.652 real 0m1.673s 00:06:48.652 user 0m1.470s 00:06:48.652 sys 0m0.620s 00:06:48.652 00:34:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.652 00:34:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.652 ************************************ 00:06:48.652 END TEST json_config_extra_key 00:06:48.652 ************************************ 00:06:48.652 00:34:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.652 00:34:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.652 00:34:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.652 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:48.652 ************************************ 00:06:48.652 START TEST alias_rpc 00:06:48.652 ************************************ 00:06:48.652 00:34:04 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.652 * Looking for test storage... 
00:06:48.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:48.652 00:34:04 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.652 00:34:04 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.652 00:34:04 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.911 00:34:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.911 --rc genhtml_branch_coverage=1 00:06:48.911 --rc genhtml_function_coverage=1 00:06:48.911 --rc genhtml_legend=1 00:06:48.911 --rc geninfo_all_blocks=1 00:06:48.911 --rc geninfo_unexecuted_blocks=1 00:06:48.911 00:06:48.911 ' 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.911 --rc genhtml_branch_coverage=1 00:06:48.911 --rc genhtml_function_coverage=1 00:06:48.911 --rc genhtml_legend=1 00:06:48.911 --rc geninfo_all_blocks=1 00:06:48.911 --rc geninfo_unexecuted_blocks=1 00:06:48.911 00:06:48.911 ' 00:06:48.911 00:34:04 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.911 --rc genhtml_branch_coverage=1 00:06:48.911 --rc genhtml_function_coverage=1 00:06:48.911 --rc genhtml_legend=1 00:06:48.911 --rc geninfo_all_blocks=1 00:06:48.911 --rc geninfo_unexecuted_blocks=1 00:06:48.911 00:06:48.911 ' 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.911 --rc genhtml_branch_coverage=1 00:06:48.911 --rc genhtml_function_coverage=1 00:06:48.911 --rc genhtml_legend=1 00:06:48.911 --rc geninfo_all_blocks=1 00:06:48.911 --rc geninfo_unexecuted_blocks=1 00:06:48.911 00:06:48.911 ' 00:06:48.911 00:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.911 00:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115600 00:06:48.911 00:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:48.911 00:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115600 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 115600 ']' 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.911 00:34:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.911 [2024-12-07 00:34:04.946172] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:48.911 [2024-12-07 00:34:04.946277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115600 ] 00:06:48.911 [2024-12-07 00:34:05.015796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.169 [2024-12-07 00:34:05.063783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.169 00:34:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.169 00:34:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.169 00:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:49.732 00:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 115600 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 115600 ']' 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 115600 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115600 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115600' 00:06:49.733 killing process with pid 115600 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 115600 00:06:49.733 00:34:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 115600 00:06:49.990 00:06:49.990 real 0m1.291s 00:06:49.990 user 0m1.408s 00:06:49.990 sys 0m0.439s 00:06:49.990 00:34:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.990 00:34:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.990 ************************************ 00:06:49.990 END TEST alias_rpc 00:06:49.990 ************************************ 00:06:49.990 00:34:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:49.990 00:34:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:49.990 00:34:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.990 00:34:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.990 00:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.990 ************************************ 00:06:49.990 START TEST spdkcli_tcp 00:06:49.990 ************************************ 00:06:49.990 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:49.990 * Looking for test storage... 
00:06:49.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:49.990 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.990 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.990 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.249 00:34:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.249 --rc genhtml_branch_coverage=1 00:06:50.249 --rc genhtml_function_coverage=1 00:06:50.249 --rc genhtml_legend=1 00:06:50.249 --rc geninfo_all_blocks=1 00:06:50.249 --rc geninfo_unexecuted_blocks=1 00:06:50.249 00:06:50.249 ' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.249 --rc genhtml_branch_coverage=1 00:06:50.249 --rc genhtml_function_coverage=1 00:06:50.249 --rc genhtml_legend=1 00:06:50.249 --rc geninfo_all_blocks=1 00:06:50.249 --rc 
geninfo_unexecuted_blocks=1 00:06:50.249 00:06:50.249 ' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.249 --rc genhtml_branch_coverage=1 00:06:50.249 --rc genhtml_function_coverage=1 00:06:50.249 --rc genhtml_legend=1 00:06:50.249 --rc geninfo_all_blocks=1 00:06:50.249 --rc geninfo_unexecuted_blocks=1 00:06:50.249 00:06:50.249 ' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.249 --rc genhtml_branch_coverage=1 00:06:50.249 --rc genhtml_function_coverage=1 00:06:50.249 --rc genhtml_legend=1 00:06:50.249 --rc geninfo_all_blocks=1 00:06:50.249 --rc geninfo_unexecuted_blocks=1 00:06:50.249 00:06:50.249 ' 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=115913 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:50.249 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 115913 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 115913 ']' 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.249 00:34:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.249 [2024-12-07 00:34:06.288169] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:50.249 [2024-12-07 00:34:06.288283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115913 ] 00:06:50.249 [2024-12-07 00:34:06.356640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.507 [2024-12-07 00:34:06.408016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.507 [2024-12-07 00:34:06.408020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.765 00:34:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.765 00:34:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:50.765 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=115923 00:06:50.765 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:50.765 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:51.024 [ 00:06:51.024 "bdev_malloc_delete", 00:06:51.024 "bdev_malloc_create", 00:06:51.024 "bdev_null_resize", 00:06:51.024 "bdev_null_delete", 00:06:51.024 "bdev_null_create", 00:06:51.024 "bdev_nvme_cuse_unregister", 00:06:51.024 "bdev_nvme_cuse_register", 00:06:51.024 "bdev_opal_new_user", 00:06:51.024 "bdev_opal_set_lock_state", 00:06:51.024 "bdev_opal_delete", 00:06:51.024 "bdev_opal_get_info", 00:06:51.024 "bdev_opal_create", 00:06:51.024 "bdev_nvme_opal_revert", 00:06:51.024 "bdev_nvme_opal_init", 00:06:51.024 "bdev_nvme_send_cmd", 00:06:51.024 "bdev_nvme_set_keys", 00:06:51.024 "bdev_nvme_get_path_iostat", 00:06:51.024 "bdev_nvme_get_mdns_discovery_info", 00:06:51.024 "bdev_nvme_stop_mdns_discovery", 00:06:51.024 "bdev_nvme_start_mdns_discovery", 00:06:51.024 "bdev_nvme_set_multipath_policy", 00:06:51.024 "bdev_nvme_set_preferred_path", 00:06:51.024 "bdev_nvme_get_io_paths", 00:06:51.024 "bdev_nvme_remove_error_injection", 00:06:51.024 "bdev_nvme_add_error_injection", 00:06:51.024 "bdev_nvme_get_discovery_info", 00:06:51.024 "bdev_nvme_stop_discovery", 00:06:51.024 "bdev_nvme_start_discovery", 00:06:51.024 "bdev_nvme_get_controller_health_info", 00:06:51.024 "bdev_nvme_disable_controller", 00:06:51.024 "bdev_nvme_enable_controller", 00:06:51.024 "bdev_nvme_reset_controller", 00:06:51.024 "bdev_nvme_get_transport_statistics", 00:06:51.024 "bdev_nvme_apply_firmware", 00:06:51.024 "bdev_nvme_detach_controller", 00:06:51.024 "bdev_nvme_get_controllers", 00:06:51.024 "bdev_nvme_attach_controller", 00:06:51.024 "bdev_nvme_set_hotplug", 00:06:51.024 "bdev_nvme_set_options", 00:06:51.024 "bdev_passthru_delete", 00:06:51.024 "bdev_passthru_create", 00:06:51.024 "bdev_lvol_set_parent_bdev", 00:06:51.024 "bdev_lvol_set_parent", 00:06:51.024 "bdev_lvol_check_shallow_copy", 00:06:51.024 "bdev_lvol_start_shallow_copy", 00:06:51.024 "bdev_lvol_grow_lvstore", 00:06:51.024 "bdev_lvol_get_lvols", 00:06:51.024 "bdev_lvol_get_lvstores", 00:06:51.024 "bdev_lvol_delete", 00:06:51.024 "bdev_lvol_set_read_only", 00:06:51.024 "bdev_lvol_resize", 00:06:51.024 "bdev_lvol_decouple_parent", 00:06:51.024 "bdev_lvol_inflate", 00:06:51.024 "bdev_lvol_rename", 00:06:51.024 "bdev_lvol_clone_bdev", 00:06:51.024 "bdev_lvol_clone", 00:06:51.024 "bdev_lvol_snapshot", 00:06:51.024 "bdev_lvol_create", 00:06:51.024 "bdev_lvol_delete_lvstore", 00:06:51.024 "bdev_lvol_rename_lvstore", 
00:06:51.024 "bdev_lvol_create_lvstore", 00:06:51.024 "bdev_raid_set_options", 00:06:51.024 "bdev_raid_remove_base_bdev", 00:06:51.024 "bdev_raid_add_base_bdev", 00:06:51.024 "bdev_raid_delete", 00:06:51.024 "bdev_raid_create", 00:06:51.024 "bdev_raid_get_bdevs", 00:06:51.024 "bdev_error_inject_error", 00:06:51.024 "bdev_error_delete", 00:06:51.024 "bdev_error_create", 00:06:51.024 "bdev_split_delete", 00:06:51.024 "bdev_split_create", 00:06:51.024 "bdev_delay_delete", 00:06:51.024 "bdev_delay_create", 00:06:51.024 "bdev_delay_update_latency", 00:06:51.024 "bdev_zone_block_delete", 00:06:51.024 "bdev_zone_block_create", 00:06:51.024 "blobfs_create", 00:06:51.024 "blobfs_detect", 00:06:51.024 "blobfs_set_cache_size", 00:06:51.024 "bdev_aio_delete", 00:06:51.024 "bdev_aio_rescan", 00:06:51.024 "bdev_aio_create", 00:06:51.024 "bdev_ftl_set_property", 00:06:51.024 "bdev_ftl_get_properties", 00:06:51.024 "bdev_ftl_get_stats", 00:06:51.024 "bdev_ftl_unmap", 00:06:51.024 "bdev_ftl_unload", 00:06:51.024 "bdev_ftl_delete", 00:06:51.024 "bdev_ftl_load", 00:06:51.024 "bdev_ftl_create", 00:06:51.024 "bdev_virtio_attach_controller", 00:06:51.024 "bdev_virtio_scsi_get_devices", 00:06:51.024 "bdev_virtio_detach_controller", 00:06:51.024 "bdev_virtio_blk_set_hotplug", 00:06:51.024 "bdev_iscsi_delete", 00:06:51.024 "bdev_iscsi_create", 00:06:51.024 "bdev_iscsi_set_options", 00:06:51.024 "accel_error_inject_error", 00:06:51.024 "ioat_scan_accel_module", 00:06:51.024 "dsa_scan_accel_module", 00:06:51.024 "iaa_scan_accel_module", 00:06:51.024 "vfu_virtio_create_fs_endpoint", 00:06:51.024 "vfu_virtio_create_scsi_endpoint", 00:06:51.024 "vfu_virtio_scsi_remove_target", 00:06:51.024 "vfu_virtio_scsi_add_target", 00:06:51.024 "vfu_virtio_create_blk_endpoint", 00:06:51.024 "vfu_virtio_delete_endpoint", 00:06:51.024 "keyring_file_remove_key", 00:06:51.024 "keyring_file_add_key", 00:06:51.024 "keyring_linux_set_options", 00:06:51.024 "fsdev_aio_delete", 00:06:51.024 "fsdev_aio_create", 00:06:51.024 "iscsi_get_histogram", 00:06:51.024 "iscsi_enable_histogram", 00:06:51.024 "iscsi_set_options", 00:06:51.024 "iscsi_get_auth_groups", 00:06:51.024 "iscsi_auth_group_remove_secret", 00:06:51.024 "iscsi_auth_group_add_secret", 00:06:51.024 "iscsi_delete_auth_group", 00:06:51.024 "iscsi_create_auth_group", 00:06:51.024 "iscsi_set_discovery_auth", 00:06:51.024 "iscsi_get_options", 00:06:51.024 "iscsi_target_node_request_logout", 00:06:51.025 "iscsi_target_node_set_redirect", 00:06:51.025 "iscsi_target_node_set_auth", 00:06:51.025 "iscsi_target_node_add_lun", 00:06:51.025 "iscsi_get_stats", 00:06:51.025 "iscsi_get_connections", 00:06:51.025 "iscsi_portal_group_set_auth", 00:06:51.025 "iscsi_start_portal_group", 00:06:51.025 "iscsi_delete_portal_group", 00:06:51.025 "iscsi_create_portal_group", 00:06:51.025 "iscsi_get_portal_groups", 00:06:51.025 "iscsi_delete_target_node", 00:06:51.025 "iscsi_target_node_remove_pg_ig_maps", 00:06:51.025 "iscsi_target_node_add_pg_ig_maps", 00:06:51.025 "iscsi_create_target_node", 00:06:51.025 "iscsi_get_target_nodes", 00:06:51.025 "iscsi_delete_initiator_group", 00:06:51.025 "iscsi_initiator_group_remove_initiators", 00:06:51.025 "iscsi_initiator_group_add_initiators", 00:06:51.025 "iscsi_create_initiator_group", 00:06:51.025 "iscsi_get_initiator_groups", 00:06:51.025 "nvmf_set_crdt", 00:06:51.025 "nvmf_set_config", 00:06:51.025 "nvmf_set_max_subsystems", 00:06:51.025 "nvmf_stop_mdns_prr", 00:06:51.025 "nvmf_publish_mdns_prr", 00:06:51.025 "nvmf_subsystem_get_listeners", 00:06:51.025 
"nvmf_subsystem_get_qpairs", 00:06:51.025 "nvmf_subsystem_get_controllers", 00:06:51.025 "nvmf_get_stats", 00:06:51.025 "nvmf_get_transports", 00:06:51.025 "nvmf_create_transport", 00:06:51.025 "nvmf_get_targets", 00:06:51.025 "nvmf_delete_target", 00:06:51.025 "nvmf_create_target", 00:06:51.025 "nvmf_subsystem_allow_any_host", 00:06:51.025 "nvmf_subsystem_set_keys", 00:06:51.025 "nvmf_subsystem_remove_host", 00:06:51.025 "nvmf_subsystem_add_host", 00:06:51.025 "nvmf_ns_remove_host", 00:06:51.025 "nvmf_ns_add_host", 00:06:51.025 "nvmf_subsystem_remove_ns", 00:06:51.025 "nvmf_subsystem_set_ns_ana_group", 00:06:51.025 "nvmf_subsystem_add_ns", 00:06:51.025 "nvmf_subsystem_listener_set_ana_state", 00:06:51.025 "nvmf_discovery_get_referrals", 00:06:51.025 "nvmf_discovery_remove_referral", 00:06:51.025 "nvmf_discovery_add_referral", 00:06:51.025 "nvmf_subsystem_remove_listener", 00:06:51.025 "nvmf_subsystem_add_listener", 00:06:51.025 "nvmf_delete_subsystem", 00:06:51.025 "nvmf_create_subsystem", 00:06:51.025 "nvmf_get_subsystems", 00:06:51.025 "env_dpdk_get_mem_stats", 00:06:51.025 "nbd_get_disks", 00:06:51.025 "nbd_stop_disk", 00:06:51.025 "nbd_start_disk", 00:06:51.025 "ublk_recover_disk", 00:06:51.025 "ublk_get_disks", 00:06:51.025 "ublk_stop_disk", 00:06:51.025 "ublk_start_disk", 00:06:51.025 "ublk_destroy_target", 00:06:51.025 "ublk_create_target", 00:06:51.025 "virtio_blk_create_transport", 00:06:51.025 "virtio_blk_get_transports", 00:06:51.025 "vhost_controller_set_coalescing", 00:06:51.025 "vhost_get_controllers", 00:06:51.025 "vhost_delete_controller", 00:06:51.025 "vhost_create_blk_controller", 00:06:51.025 "vhost_scsi_controller_remove_target", 00:06:51.025 "vhost_scsi_controller_add_target", 00:06:51.025 "vhost_start_scsi_controller", 00:06:51.025 "vhost_create_scsi_controller", 00:06:51.025 "thread_set_cpumask", 00:06:51.025 "scheduler_set_options", 00:06:51.025 "framework_get_governor", 00:06:51.025 "framework_get_scheduler", 00:06:51.025 "framework_set_scheduler", 00:06:51.025 "framework_get_reactors", 00:06:51.025 "thread_get_io_channels", 00:06:51.025 "thread_get_pollers", 00:06:51.025 "thread_get_stats", 00:06:51.025 "framework_monitor_context_switch", 00:06:51.025 "spdk_kill_instance", 00:06:51.025 "log_enable_timestamps", 00:06:51.025 "log_get_flags", 00:06:51.025 "log_clear_flag", 00:06:51.025 "log_set_flag", 00:06:51.025 "log_get_level", 00:06:51.025 "log_set_level", 00:06:51.025 "log_get_print_level", 00:06:51.025 "log_set_print_level", 00:06:51.025 "framework_enable_cpumask_locks", 00:06:51.025 "framework_disable_cpumask_locks", 00:06:51.025 "framework_wait_init", 00:06:51.025 "framework_start_init", 00:06:51.025 "scsi_get_devices", 00:06:51.025 "bdev_get_histogram", 00:06:51.025 "bdev_enable_histogram", 00:06:51.025 "bdev_set_qos_limit", 00:06:51.025 "bdev_set_qd_sampling_period", 00:06:51.025 "bdev_get_bdevs", 00:06:51.025 "bdev_reset_iostat", 00:06:51.025 "bdev_get_iostat", 00:06:51.025 "bdev_examine", 00:06:51.025 "bdev_wait_for_examine", 00:06:51.025 "bdev_set_options", 00:06:51.025 "accel_get_stats", 00:06:51.025 "accel_set_options", 00:06:51.025 "accel_set_driver", 00:06:51.025 "accel_crypto_key_destroy", 00:06:51.025 "accel_crypto_keys_get", 00:06:51.025 "accel_crypto_key_create", 00:06:51.025 "accel_assign_opc", 00:06:51.025 "accel_get_module_info", 00:06:51.025 "accel_get_opc_assignments", 00:06:51.025 "vmd_rescan", 00:06:51.025 "vmd_remove_device", 00:06:51.025 "vmd_enable", 00:06:51.025 "sock_get_default_impl", 00:06:51.025 "sock_set_default_impl", 
00:06:51.025 "sock_impl_set_options", 00:06:51.025 "sock_impl_get_options", 00:06:51.025 "iobuf_get_stats", 00:06:51.025 "iobuf_set_options", 00:06:51.025 "keyring_get_keys", 00:06:51.025 "vfu_tgt_set_base_path", 00:06:51.025 "framework_get_pci_devices", 00:06:51.025 "framework_get_config", 00:06:51.025 "framework_get_subsystems", 00:06:51.025 "fsdev_set_opts", 00:06:51.025 "fsdev_get_opts", 00:06:51.025 "trace_get_info", 00:06:51.025 "trace_get_tpoint_group_mask", 00:06:51.025 "trace_disable_tpoint_group", 00:06:51.025 "trace_enable_tpoint_group", 00:06:51.025 "trace_clear_tpoint_mask", 00:06:51.025 "trace_set_tpoint_mask", 00:06:51.025 "notify_get_notifications", 00:06:51.025 "notify_get_types", 00:06:51.025 "spdk_get_version", 00:06:51.025 "rpc_get_methods" 00:06:51.025 ] 00:06:51.025 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.025 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:51.025 00:34:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 115913 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 115913 ']' 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 115913 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115913 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115913' 00:06:51.025 killing process with pid 115913 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 115913 00:06:51.025 00:34:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 115913 00:06:51.283 00:06:51.283 real 0m1.288s 00:06:51.283 user 0m2.281s 00:06:51.283 sys 0m0.514s 00:06:51.283 00:34:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.283 00:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.283 ************************************ 00:06:51.283 END TEST spdkcli_tcp 00:06:51.283 ************************************ 00:06:51.283 00:34:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:51.283 00:34:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.283 00:34:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.283 00:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.283 ************************************ 00:06:51.283 START TEST dpdk_mem_utility 00:06:51.283 ************************************ 00:06:51.283 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:51.542 * Looking for test storage... 
00:06:51.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.542 00:34:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.542 --rc genhtml_branch_coverage=1 00:06:51.542 --rc genhtml_function_coverage=1 00:06:51.542 --rc genhtml_legend=1 00:06:51.542 --rc geninfo_all_blocks=1 00:06:51.542 --rc geninfo_unexecuted_blocks=1 00:06:51.542 00:06:51.542 ' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.542 --rc 
genhtml_branch_coverage=1 00:06:51.542 --rc genhtml_function_coverage=1 00:06:51.542 --rc genhtml_legend=1 00:06:51.542 --rc geninfo_all_blocks=1 00:06:51.542 --rc geninfo_unexecuted_blocks=1 00:06:51.542 00:06:51.542 ' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.542 --rc genhtml_branch_coverage=1 00:06:51.542 --rc genhtml_function_coverage=1 00:06:51.542 --rc genhtml_legend=1 00:06:51.542 --rc geninfo_all_blocks=1 00:06:51.542 --rc geninfo_unexecuted_blocks=1 00:06:51.542 00:06:51.542 ' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.542 --rc genhtml_branch_coverage=1 00:06:51.542 --rc genhtml_function_coverage=1 00:06:51.542 --rc genhtml_legend=1 00:06:51.542 --rc geninfo_all_blocks=1 00:06:51.542 --rc geninfo_unexecuted_blocks=1 00:06:51.542 00:06:51.542 ' 00:06:51.542 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:51.542 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=116123 00:06:51.542 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:51.542 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 116123 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 116123 ']' 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.542 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.542 [2024-12-07 00:34:07.620331] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:51.542 [2024-12-07 00:34:07.620419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116123 ] 00:06:51.542 [2024-12-07 00:34:07.690654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.799 [2024-12-07 00:34:07.734969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.058 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.058 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:52.058 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:52.058 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:52.058 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.058 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.058 { 00:06:52.058 "filename": "/tmp/spdk_mem_dump.txt" 00:06:52.058 } 00:06:52.058 00:34:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.058 00:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:52.058 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:52.058 1 heaps totaling size 818.000000 MiB 00:06:52.058 size: 818.000000 MiB heap id: 0 00:06:52.058 end heaps---------- 00:06:52.058 9 mempools totaling size 603.782043 MiB 00:06:52.058 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:52.058 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:52.058 size: 100.555481 MiB name: bdev_io_116123 00:06:52.058 size: 50.003479 MiB name: msgpool_116123 00:06:52.058 size: 36.509338 MiB name: fsdev_io_116123 00:06:52.058 size: 21.763794 MiB name: PDU_Pool 00:06:52.058 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:52.058 size: 4.133484 MiB name: evtpool_116123 00:06:52.058 size: 0.026123 MiB name: Session_Pool 00:06:52.058 end mempools------- 00:06:52.058 6 memzones totaling size 4.142822 MiB 00:06:52.058 size: 1.000366 MiB name: RG_ring_0_116123 00:06:52.058 size: 1.000366 MiB name: RG_ring_1_116123 00:06:52.058 size: 1.000366 MiB name: RG_ring_4_116123 00:06:52.058 size: 1.000366 MiB name: RG_ring_5_116123 00:06:52.058 size: 0.125366 MiB name: RG_ring_2_116123 00:06:52.058 size: 0.015991 MiB name: RG_ring_3_116123 00:06:52.058 end memzones------- 00:06:52.058 00:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:52.058 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:52.058 list of free elements. 
size: 10.852478 MiB 00:06:52.058 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:52.058 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:52.058 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:52.058 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:52.058 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:52.058 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:52.058 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:52.058 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:52.058 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:52.058 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:52.058 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:52.058 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:52.058 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:52.058 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:52.058 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:52.058 list of standard malloc elements. size: 199.218628 MiB 00:06:52.058 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:52.058 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:52.058 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:52.058 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:52.058 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:52.058 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:52.058 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:52.058 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:52.058 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:52.058 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:52.058 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:52.058 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:52.058 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:52.058 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:52.058 list of memzone associated elements. size: 607.928894 MiB 00:06:52.058 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:52.058 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:52.058 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:52.058 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:52.058 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:52.058 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_116123_0 00:06:52.058 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:52.058 associated memzone info: size: 48.002930 MiB name: MP_msgpool_116123_0 00:06:52.058 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:52.058 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_116123_0 00:06:52.058 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:52.058 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:52.058 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:52.058 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:52.058 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:52.058 associated memzone info: size: 3.000122 MiB name: MP_evtpool_116123_0 00:06:52.058 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:52.058 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_116123 00:06:52.058 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:52.058 associated memzone info: size: 1.007996 MiB name: MP_evtpool_116123 00:06:52.059 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:52.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:52.059 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:52.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:52.059 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:52.059 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:52.059 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:52.059 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:52.059 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:52.059 associated memzone info: size: 1.000366 MiB name: RG_ring_0_116123 00:06:52.059 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:52.059 associated memzone info: size: 1.000366 MiB name: RG_ring_1_116123 00:06:52.059 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:52.059 associated memzone info: size: 1.000366 MiB name: RG_ring_4_116123 00:06:52.059 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:52.059 associated memzone info: size: 1.000366 MiB name: RG_ring_5_116123 00:06:52.059 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:52.059 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_116123 00:06:52.059 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:52.059 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_116123 00:06:52.059 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:52.059 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:52.059 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:52.059 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:52.059 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:52.059 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:52.059 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:52.059 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_116123 00:06:52.059 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:52.059 associated memzone info: size: 0.125366 MiB name: RG_ring_2_116123 00:06:52.059 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:52.059 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:52.059 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:52.059 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:52.059 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:52.059 associated memzone info: size: 0.015991 MiB name: RG_ring_3_116123 00:06:52.059 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:52.059 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:52.059 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:52.059 associated memzone info: size: 0.000183 MiB name: MP_msgpool_116123 00:06:52.059 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:52.059 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_116123 00:06:52.059 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:52.059 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_116123 00:06:52.059 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:52.059 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:52.059 00:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:52.059 00:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 116123 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 116123 ']' 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 116123 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116123 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116123' 00:06:52.059 killing process with pid 116123 00:06:52.059 00:34:08 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 116123 00:06:52.059 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 116123 00:06:52.624 00:06:52.624 real 0m1.087s 00:06:52.624 user 0m1.058s 00:06:52.624 sys 0m0.423s 00:06:52.624 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.624 00:34:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.624 ************************************ 00:06:52.624 END TEST dpdk_mem_utility 00:06:52.624 ************************************ 00:06:52.624 00:34:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:52.624 00:34:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.624 00:34:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.624 00:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.624 ************************************ 00:06:52.624 START TEST event 00:06:52.624 ************************************ 00:06:52.624 00:34:08 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:52.624 * Looking for test storage... 00:06:52.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.625 00:34:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.625 00:34:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.625 00:34:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.625 00:34:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.625 00:34:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.625 00:34:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.625 00:34:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.625 00:34:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.625 00:34:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.625 00:34:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.625 00:34:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.625 00:34:08 event -- scripts/common.sh@344 -- # case "$op" in 00:06:52.625 00:34:08 event -- scripts/common.sh@345 -- # : 1 00:06:52.625 00:34:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.625 00:34:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.625 00:34:08 event -- scripts/common.sh@365 -- # decimal 1 00:06:52.625 00:34:08 event -- scripts/common.sh@353 -- # local d=1 00:06:52.625 00:34:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.625 00:34:08 event -- scripts/common.sh@355 -- # echo 1 00:06:52.625 00:34:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.625 00:34:08 event -- scripts/common.sh@366 -- # decimal 2 00:06:52.625 00:34:08 event -- scripts/common.sh@353 -- # local d=2 00:06:52.625 00:34:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.625 00:34:08 event -- scripts/common.sh@355 -- # echo 2 00:06:52.625 00:34:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.625 00:34:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.625 00:34:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.625 00:34:08 event -- scripts/common.sh@368 -- # return 0 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.625 --rc genhtml_branch_coverage=1 00:06:52.625 --rc genhtml_function_coverage=1 00:06:52.625 --rc genhtml_legend=1 00:06:52.625 --rc geninfo_all_blocks=1 00:06:52.625 --rc geninfo_unexecuted_blocks=1 00:06:52.625 00:06:52.625 ' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.625 --rc genhtml_branch_coverage=1 00:06:52.625 --rc genhtml_function_coverage=1 00:06:52.625 --rc genhtml_legend=1 00:06:52.625 --rc geninfo_all_blocks=1 00:06:52.625 --rc geninfo_unexecuted_blocks=1 00:06:52.625 00:06:52.625 ' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.625 --rc genhtml_branch_coverage=1 00:06:52.625 --rc genhtml_function_coverage=1 00:06:52.625 --rc genhtml_legend=1 00:06:52.625 --rc geninfo_all_blocks=1 00:06:52.625 --rc geninfo_unexecuted_blocks=1 00:06:52.625 00:06:52.625 ' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.625 --rc genhtml_branch_coverage=1 00:06:52.625 --rc genhtml_function_coverage=1 00:06:52.625 --rc genhtml_legend=1 00:06:52.625 --rc geninfo_all_blocks=1 00:06:52.625 --rc geninfo_unexecuted_blocks=1 00:06:52.625 00:06:52.625 ' 00:06:52.625 00:34:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:52.625 00:34:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.625 00:34:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:52.625 00:34:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.625 00:34:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.625 ************************************ 00:06:52.625 START TEST event_perf 00:06:52.625 ************************************ 00:06:52.625 00:34:08 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:52.625 Running I/O for 1 seconds...[2024-12-07 00:34:08.751017] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:52.625 [2024-12-07 00:34:08.751084] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116325 ] 00:06:52.883 [2024-12-07 00:34:08.819018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.883 [2024-12-07 00:34:08.866838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.883 [2024-12-07 00:34:08.866944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.883 [2024-12-07 00:34:08.867046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.883 [2024-12-07 00:34:08.867050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.818 Running I/O for 1 seconds... 00:06:53.818 lcore 0: 231239 00:06:53.818 lcore 1: 231239 00:06:53.818 lcore 2: 231240 00:06:53.818 lcore 3: 231238 00:06:53.818 done. 00:06:53.818 00:06:53.818 real 0m1.177s 00:06:53.818 user 0m4.100s 00:06:53.818 sys 0m0.071s 00:06:53.818 00:34:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.818 00:34:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.818 ************************************ 00:06:53.818 END TEST event_perf 00:06:53.818 ************************************ 00:06:53.818 00:34:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.818 00:34:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:53.818 00:34:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.818 00:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.818 ************************************ 00:06:53.818 START TEST event_reactor 00:06:53.818 ************************************ 00:06:53.818 00:34:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.818 [2024-12-07 00:34:09.966495] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:53.818 [2024-12-07 00:34:09.966564] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116480 ] 00:06:54.076 [2024-12-07 00:34:10.038758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.076 [2024-12-07 00:34:10.093148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.018 test_start 00:06:55.018 oneshot 00:06:55.018 tick 100 00:06:55.018 tick 100 00:06:55.018 tick 250 00:06:55.019 tick 100 00:06:55.019 tick 100 00:06:55.019 tick 250 00:06:55.019 tick 500 00:06:55.019 tick 100 00:06:55.019 tick 100 00:06:55.019 tick 100 00:06:55.019 tick 250 00:06:55.019 tick 100 00:06:55.019 tick 100 00:06:55.019 test_end 00:06:55.019 00:06:55.019 real 0m1.182s 00:06:55.019 user 0m1.115s 00:06:55.019 sys 0m0.063s 00:06:55.019 00:34:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.019 00:34:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:55.019 ************************************ 00:06:55.019 END TEST event_reactor 00:06:55.019 ************************************ 00:06:55.019 00:34:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.019 00:34:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:55.019 00:34:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.019 00:34:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.278 ************************************ 00:06:55.278 START TEST event_reactor_perf 00:06:55.278 ************************************ 00:06:55.278 00:34:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.278 [2024-12-07 00:34:11.201966] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:55.278 [2024-12-07 00:34:11.202058] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116634 ] 00:06:55.278 [2024-12-07 00:34:11.268382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.278 [2024-12-07 00:34:11.313218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.209 test_start 00:06:56.210 test_end 00:06:56.210 Performance: 449053 events per second 00:06:56.210 00:06:56.210 real 0m1.169s 00:06:56.210 user 0m1.102s 00:06:56.210 sys 0m0.063s 00:06:56.210 00:34:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.210 00:34:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.210 ************************************ 00:06:56.210 END TEST event_reactor_perf 00:06:56.210 ************************************ 00:06:56.468 00:34:12 event -- event/event.sh@49 -- # uname -s 00:06:56.468 00:34:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:56.468 00:34:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.468 00:34:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.468 00:34:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.468 00:34:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.468 ************************************ 00:06:56.468 START TEST event_scheduler 00:06:56.468 ************************************ 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.468 * Looking for test storage... 
00:06:56.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.468 00:34:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.468 --rc genhtml_branch_coverage=1 00:06:56.468 --rc genhtml_function_coverage=1 00:06:56.468 --rc genhtml_legend=1 00:06:56.468 --rc geninfo_all_blocks=1 00:06:56.468 --rc geninfo_unexecuted_blocks=1 00:06:56.468 00:06:56.468 ' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.468 --rc genhtml_branch_coverage=1 00:06:56.468 --rc genhtml_function_coverage=1 00:06:56.468 --rc genhtml_legend=1 00:06:56.468 --rc geninfo_all_blocks=1 00:06:56.468 --rc geninfo_unexecuted_blocks=1 00:06:56.468 00:06:56.468 ' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.468 --rc genhtml_branch_coverage=1 00:06:56.468 --rc genhtml_function_coverage=1 00:06:56.468 --rc genhtml_legend=1 00:06:56.468 --rc geninfo_all_blocks=1 00:06:56.468 --rc geninfo_unexecuted_blocks=1 00:06:56.468 00:06:56.468 ' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.468 --rc genhtml_branch_coverage=1 00:06:56.468 --rc genhtml_function_coverage=1 00:06:56.468 --rc genhtml_legend=1 00:06:56.468 --rc geninfo_all_blocks=1 00:06:56.468 --rc geninfo_unexecuted_blocks=1 00:06:56.468 00:06:56.468 ' 00:06:56.468 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:56.468 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=116820 00:06:56.468 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:56.468 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.468 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 116820 
00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 116820 ']' 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.468 00:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.468 [2024-12-07 00:34:12.600430] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:56.468 [2024-12-07 00:34:12.600513] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116820 ] 00:06:56.727 [2024-12-07 00:34:12.671846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.727 [2024-12-07 00:34:12.724472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.727 [2024-12-07 00:34:12.724528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.727 [2024-12-07 00:34:12.727018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.727 [2024-12-07 00:34:12.727046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:56.727 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.727 [2024-12-07 00:34:12.847967] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:56.727 [2024-12-07 00:34:12.848020] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.727 [2024-12-07 00:34:12.848054] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.727 [2024-12-07 00:34:12.848078] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.727 [2024-12-07 00:34:12.848089] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.727 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.727 00:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 [2024-12-07 00:34:12.948064] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:56.986 00:34:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.986 00:34:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.986 00:34:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.986 00:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 ************************************ 00:06:56.986 START TEST scheduler_create_thread 00:06:56.986 ************************************ 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 2 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 3 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 4 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 5 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 6 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 7 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 8 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 9 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 10 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.986 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.553 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.553 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:57.553 00:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:57.553 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.553 00:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.925 00:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.925 00:06:58.925 real 0m1.753s 00:06:58.925 user 0m0.013s 00:06:58.925 sys 0m0.006s 00:06:58.925 00:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.925 00:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.925 ************************************ 00:06:58.925 END TEST scheduler_create_thread 00:06:58.925 ************************************ 00:06:58.925 00:34:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:58.925 00:34:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 116820 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 116820 ']' 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 116820 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116820 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116820' 00:06:58.925 killing process with pid 116820 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 116820 00:06:58.925 00:34:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 116820 00:06:59.182 [2024-12-07 00:34:15.211637] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:59.440 00:06:59.440 real 0m3.002s 00:06:59.440 user 0m4.070s 00:06:59.440 sys 0m0.362s 00:06:59.440 00:34:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.440 00:34:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.440 ************************************ 00:06:59.440 END TEST event_scheduler 00:06:59.440 ************************************ 00:06:59.440 00:34:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:59.440 00:34:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:59.440 00:34:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.440 00:34:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.440 00:34:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.440 ************************************ 00:06:59.440 START TEST app_repeat 00:06:59.440 ************************************ 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=117267 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 117267' 00:06:59.440 Process app_repeat pid: 117267 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:59.440 spdk_app_start Round 0 00:06:59.440 00:34:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117267 /var/tmp/spdk-nbd.sock 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117267 ']' 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.440 00:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.440 [2024-12-07 00:34:15.493768] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:59.440 [2024-12-07 00:34:15.493832] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117267 ] 00:06:59.440 [2024-12-07 00:34:15.556766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.698 [2024-12-07 00:34:15.602222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.698 [2024-12-07 00:34:15.602226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.698 00:34:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.698 00:34:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.698 00:34:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.956 Malloc0 00:06:59.956 00:34:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.212 Malloc1 00:07:00.212 00:34:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.212 00:34:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.470 /dev/nbd0 00:07:00.727 00:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.727 00:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.727 1+0 records in 00:07:00.727 1+0 records out 00:07:00.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217419 s, 18.8 MB/s 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.727 00:34:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.727 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.727 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.727 00:34:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.983 /dev/nbd1 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.983 1+0 records in 00:07:00.983 1+0 records out 00:07:00.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264094 s, 15.5 MB/s 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.983 00:34:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.983 
00:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.983 00:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.241 { 00:07:01.241 "nbd_device": "/dev/nbd0", 00:07:01.241 "bdev_name": "Malloc0" 00:07:01.241 }, 00:07:01.241 { 00:07:01.241 "nbd_device": "/dev/nbd1", 00:07:01.241 "bdev_name": "Malloc1" 00:07:01.241 } 00:07:01.241 ]' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.241 { 00:07:01.241 "nbd_device": "/dev/nbd0", 00:07:01.241 "bdev_name": "Malloc0" 00:07:01.241 }, 00:07:01.241 { 00:07:01.241 "nbd_device": "/dev/nbd1", 00:07:01.241 "bdev_name": "Malloc1" 00:07:01.241 } 00:07:01.241 ]' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.241 /dev/nbd1' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.241 /dev/nbd1' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.241 256+0 records in 00:07:01.241 256+0 records out 00:07:01.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515259 s, 204 MB/s 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.241 256+0 records in 00:07:01.241 256+0 records out 00:07:01.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203883 s, 51.4 MB/s 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.241 256+0 records in 00:07:01.241 256+0 records out 00:07:01.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220986 s, 47.4 MB/s 00:07:01.241 00:34:17 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.241 00:34:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.242 00:34:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.242 00:34:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.511 00:34:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.076 00:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.076 00:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.076 00:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.076 00:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.333 00:34:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.333 00:34:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.590 00:34:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.846 [2024-12-07 00:34:18.748319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.846 [2024-12-07 00:34:18.792094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.846 [2024-12-07 00:34:18.792094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.846 [2024-12-07 00:34:18.850106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.846 [2024-12-07 00:34:18.850170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.121 00:34:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.121 00:34:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:06.121 spdk_app_start Round 1 00:07:06.121 00:34:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117267 /var/tmp/spdk-nbd.sock 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117267 ']' 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
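For readers tracing the app_repeat rounds, the setup that follows this point reduces to a handful of calls against the app's RPC socket: create two malloc bdevs, export each one as an NBD device, and confirm both show up in nbd_get_disks. A minimal sketch of that sequence, assuming the app_repeat instance is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # two 64 MB malloc bdevs with a 4096-byte block size; the RPC prints the bdev name
    $RPC bdev_malloc_create 64 4096      # -> Malloc0
    $RPC bdev_malloc_create 64 4096      # -> Malloc1
    # export each bdev through the kernel NBD driver
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    # list the active exports; the test expects exactly two /dev/nbd* entries
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'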
00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.121 00:34:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.121 00:34:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.121 Malloc0 00:07:06.121 00:34:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.378 Malloc1 00:07:06.378 00:34:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.378 00:34:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.636 /dev/nbd0 00:07:06.636 00:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.636 00:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:06.636 1+0 records in 00:07:06.636 1+0 records out 00:07:06.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210245 s, 19.5 MB/s 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.636 00:34:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.636 00:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.636 00:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.636 00:34:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.894 /dev/nbd1 00:07:06.894 00:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.894 00:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.894 00:34:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.894 00:34:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.894 00:34:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.152 1+0 records in 00:07:07.152 1+0 records out 00:07:07.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206697 s, 19.8 MB/s 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.152 00:34:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.152 00:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.152 00:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.152 00:34:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.152 00:34:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.152 00:34:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:07.411 { 00:07:07.411 "nbd_device": "/dev/nbd0", 00:07:07.411 "bdev_name": "Malloc0" 00:07:07.411 }, 00:07:07.411 { 00:07:07.411 "nbd_device": "/dev/nbd1", 00:07:07.411 "bdev_name": "Malloc1" 00:07:07.411 } 00:07:07.411 ]' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.411 { 00:07:07.411 "nbd_device": "/dev/nbd0", 00:07:07.411 "bdev_name": "Malloc0" 00:07:07.411 }, 00:07:07.411 { 00:07:07.411 "nbd_device": "/dev/nbd1", 00:07:07.411 "bdev_name": "Malloc1" 00:07:07.411 } 00:07:07.411 ]' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.411 /dev/nbd1' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.411 /dev/nbd1' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.411 256+0 records in 00:07:07.411 256+0 records out 00:07:07.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508046 s, 206 MB/s 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.411 256+0 records in 00:07:07.411 256+0 records out 00:07:07.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200532 s, 52.3 MB/s 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.411 256+0 records in 00:07:07.411 256+0 records out 00:07:07.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217289 s, 48.3 MB/s 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.411 00:34:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.669 00:34:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.927 00:34:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.928 00:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.186 00:34:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.186 00:34:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.755 00:34:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.755 [2024-12-07 00:34:24.809007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.755 [2024-12-07 00:34:24.852029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.755 [2024-12-07 00:34:24.852029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.014 [2024-12-07 00:34:24.904989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.014 [2024-12-07 00:34:24.905064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:11.544 00:34:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.544 00:34:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:11.544 spdk_app_start Round 2 00:07:11.544 00:34:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 117267 /var/tmp/spdk-nbd.sock 00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117267 ']' 00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
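The data-path check repeated in each round above is plain dd and cmp against the exported NBD devices: fill a scratch file with random data, write it to each device with O_DIRECT, then read it back and compare. A condensed sketch, assuming /dev/nbd0 and /dev/nbd1 are backed by the two malloc bdevs and using a hypothetical scratch path in place of the test's nbdrandtest file:

    tmp=/tmp/nbdrandtest.$$                                     # hypothetical scratch path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare it back
    done
    rm -f "$tmp"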
00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.544 00:34:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.803 00:34:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.803 00:34:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:11.803 00:34:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.061 Malloc0 00:07:12.061 00:34:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.320 Malloc1 00:07:12.320 00:34:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.320 00:34:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.886 /dev/nbd0 00:07:12.886 00:34:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.886 00:34:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.886 00:34:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.886 00:34:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.886 00:34:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:12.887 1+0 records in 00:07:12.887 1+0 records out 00:07:12.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268857 s, 15.2 MB/s 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.887 00:34:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.887 00:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.887 00:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.887 00:34:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.145 /dev/nbd1 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.145 1+0 records in 00:07:13.145 1+0 records out 00:07:13.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219676 s, 18.6 MB/s 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.145 00:34:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.145 00:34:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:13.404 { 00:07:13.404 "nbd_device": "/dev/nbd0", 00:07:13.404 "bdev_name": "Malloc0" 00:07:13.404 }, 00:07:13.404 { 00:07:13.404 "nbd_device": "/dev/nbd1", 00:07:13.404 "bdev_name": "Malloc1" 00:07:13.404 } 00:07:13.404 ]' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.404 { 00:07:13.404 "nbd_device": "/dev/nbd0", 00:07:13.404 "bdev_name": "Malloc0" 00:07:13.404 }, 00:07:13.404 { 00:07:13.404 "nbd_device": "/dev/nbd1", 00:07:13.404 "bdev_name": "Malloc1" 00:07:13.404 } 00:07:13.404 ]' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.404 /dev/nbd1' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.404 /dev/nbd1' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.404 256+0 records in 00:07:13.404 256+0 records out 00:07:13.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049515 s, 212 MB/s 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.404 256+0 records in 00:07:13.404 256+0 records out 00:07:13.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198716 s, 52.8 MB/s 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.404 256+0 records in 00:07:13.404 256+0 records out 00:07:13.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221344 s, 47.4 MB/s 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.404 00:34:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.663 00:34:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.922 00:34:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.180 00:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.439 00:34:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.439 00:34:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.698 00:34:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.957 [2024-12-07 00:34:30.871477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.957 [2024-12-07 00:34:30.914557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.957 [2024-12-07 00:34:30.914559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.957 [2024-12-07 00:34:30.971908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.958 [2024-12-07 00:34:30.971970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.243 00:34:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 117267 /var/tmp/spdk-nbd.sock 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 117267 ']' 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
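Each round ends the way the trace above shows: stop both NBD exports, wait for the kernel to drop the devices from /proc/partitions, confirm nbd_get_disks reports nothing, then SIGTERM the app and sleep before the next round. A minimal sketch under the same assumptions as before:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk "$dev"
        # wait until the kernel no longer lists the device
        while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    done
    # the disk list should now be empty
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "unexpected NBD exports left behind" >&2
    $RPC spdk_kill_instance SIGTERM
    sleep 3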
00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.243 00:34:33 event.app_repeat -- event/event.sh@39 -- # killprocess 117267 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 117267 ']' 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 117267 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117267 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.243 00:34:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.244 00:34:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117267' 00:07:18.244 killing process with pid 117267 00:07:18.244 00:34:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 117267 00:07:18.244 00:34:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 117267 00:07:18.244 spdk_app_start is called in Round 0. 00:07:18.244 Shutdown signal received, stop current app iteration 00:07:18.244 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:07:18.244 spdk_app_start is called in Round 1. 00:07:18.244 Shutdown signal received, stop current app iteration 00:07:18.244 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:07:18.244 spdk_app_start is called in Round 2. 00:07:18.244 Shutdown signal received, stop current app iteration 00:07:18.244 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:07:18.244 spdk_app_start is called in Round 3. 
00:07:18.244 Shutdown signal received, stop current app iteration 00:07:18.244 00:34:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:18.244 00:34:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:18.244 00:07:18.244 real 0m18.692s 00:07:18.244 user 0m41.349s 00:07:18.244 sys 0m3.326s 00:07:18.244 00:34:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.244 00:34:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.244 ************************************ 00:07:18.244 END TEST app_repeat 00:07:18.244 ************************************ 00:07:18.244 00:34:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:18.244 00:34:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:18.244 00:34:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.244 00:34:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.244 00:34:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.244 ************************************ 00:07:18.244 START TEST cpu_locks 00:07:18.244 ************************************ 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:18.244 * Looking for test storage... 00:07:18.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.244 00:34:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.244 --rc genhtml_branch_coverage=1 00:07:18.244 --rc genhtml_function_coverage=1 00:07:18.244 --rc genhtml_legend=1 00:07:18.244 --rc geninfo_all_blocks=1 00:07:18.244 --rc geninfo_unexecuted_blocks=1 00:07:18.244 00:07:18.244 ' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.244 --rc genhtml_branch_coverage=1 00:07:18.244 --rc genhtml_function_coverage=1 00:07:18.244 --rc genhtml_legend=1 00:07:18.244 --rc geninfo_all_blocks=1 00:07:18.244 --rc geninfo_unexecuted_blocks=1 00:07:18.244 00:07:18.244 ' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.244 --rc genhtml_branch_coverage=1 00:07:18.244 --rc genhtml_function_coverage=1 00:07:18.244 --rc genhtml_legend=1 00:07:18.244 --rc geninfo_all_blocks=1 00:07:18.244 --rc geninfo_unexecuted_blocks=1 00:07:18.244 00:07:18.244 ' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.244 --rc genhtml_branch_coverage=1 00:07:18.244 --rc genhtml_function_coverage=1 00:07:18.244 --rc genhtml_legend=1 00:07:18.244 --rc geninfo_all_blocks=1 00:07:18.244 --rc geninfo_unexecuted_blocks=1 00:07:18.244 00:07:18.244 ' 00:07:18.244 00:34:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:18.244 00:34:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:18.244 00:34:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:18.244 00:34:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.244 00:34:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.244 ************************************ 
00:07:18.244 START TEST default_locks 00:07:18.244 ************************************ 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=119761 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 119761 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 119761 ']' 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.244 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.245 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.245 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.245 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.503 [2024-12-07 00:34:34.439722] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:18.503 [2024-12-07 00:34:34.439801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119761 ] 00:07:18.503 [2024-12-07 00:34:34.506384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.503 [2024-12-07 00:34:34.549977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.761 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.761 00:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:18.761 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 119761 00:07:18.761 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 119761 00:07:18.761 00:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.028 lslocks: write error 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 119761 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 119761 ']' 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 119761 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119761 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119761' 
00:07:19.028 killing process with pid 119761 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 119761 00:07:19.028 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 119761 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 119761 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 119761 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 119761 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 119761 ']' 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
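The default_locks test traced above asserts that a plain spdk_tgt started with -m 0x1 holds a per-core CPU lock that is visible in lslocks; the stray "lslocks: write error" line is presumably just lslocks hitting a closed pipe once grep -q exits on its first match, not a failure. A small sketch of the check, assuming the target's PID is in $pid and its RPC socket is already up:

    # the running target should hold a spdk_cpu_lock file lock
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "CPU core lock held by pid $pid"
    else
        echo "expected spdk_cpu_lock not found" >&2
    fi
    kill "$pid"                  # roughly what the test's killprocess helper does
    wait "$pid" 2>/dev/null || true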
00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (119761) - No such process 00:07:19.598 ERROR: process (pid: 119761) is no longer running 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.598 00:07:19.598 real 0m1.105s 00:07:19.598 user 0m1.069s 00:07:19.598 sys 0m0.486s 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.598 00:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.598 ************************************ 00:07:19.598 END TEST default_locks 00:07:19.598 ************************************ 00:07:19.598 00:34:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:19.598 00:34:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.598 00:34:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.598 00:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.598 ************************************ 00:07:19.598 START TEST default_locks_via_rpc 00:07:19.598 ************************************ 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=119925 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 119925 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 119925 ']' 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
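default_locks_via_rpc, which starts here, covers the same lock files but toggles them at runtime over RPC: framework_disable_cpumask_locks releases them while the app keeps running, and framework_enable_cpumask_locks takes them again. A hedged sketch of that sequence, assuming a target started with -m 0x1, its PID in $pid, and rpc.py talking to the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # drop the per-core locks without stopping the app
    $RPC framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held" >&2
    # take them back and confirm they reappear
    $RPC framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired by pid $pid"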
00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.598 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.598 [2024-12-07 00:34:35.598376] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:19.598 [2024-12-07 00:34:35.598468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119925 ] 00:07:19.598 [2024-12-07 00:34:35.663026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.598 [2024-12-07 00:34:35.705537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 119925 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 119925 00:07:19.857 00:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 119925 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 119925 ']' 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 119925 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.115 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119925 00:07:20.373 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.373 00:34:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.373 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119925' 00:07:20.373 killing process with pid 119925 00:07:20.373 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 119925 00:07:20.373 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 119925 00:07:20.631 00:07:20.631 real 0m1.111s 00:07:20.631 user 0m1.087s 00:07:20.631 sys 0m0.477s 00:07:20.631 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.631 00:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.631 ************************************ 00:07:20.631 END TEST default_locks_via_rpc 00:07:20.631 ************************************ 00:07:20.631 00:34:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:20.631 00:34:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.631 00:34:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.631 00:34:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.631 ************************************ 00:07:20.631 START TEST non_locking_app_on_locked_coremask 00:07:20.631 ************************************ 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=120085 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 120085 /var/tmp/spdk.sock 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120085 ']' 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.631 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.632 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.632 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.632 00:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.632 [2024-12-07 00:34:36.759890] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
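For anyone reading the default_locks_via_rpc trace above: the test toggles core locking over RPC (framework_disable_cpumask_locks, then framework_enable_cpumask_locks) and verifies the on-disk state after each step. A minimal sketch of those two checks, assuming the layout the trace itself shows, where each claimed core is backed by a /var/tmp/spdk_cpu_lock_* file that the target holds a file lock on (helper names follow the trace; the bodies are simplified, not the exact cpu_locks.sh source):

    # With the locks disabled over RPC, no lock files should exist.
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        [[ ! -e ${lock_files[0]} ]]   # an unmatched glob stays literal, so test for a real file
    }

    # With the locks re-enabled, lslocks should report the target's pid
    # holding a lock on one of the spdk_cpu_lock files.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

Both RPCs are ordinary calls against the target's default socket, e.g. ./scripts/rpc.py framework_disable_cpumask_locks.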
00:07:20.632 [2024-12-07 00:34:36.759973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120085 ] 00:07:20.890 [2024-12-07 00:34:36.828048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.890 [2024-12-07 00:34:36.876682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=120099 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 120099 /var/tmp/spdk2.sock 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120099 ']' 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.149 00:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 [2024-12-07 00:34:37.178887] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:21.149 [2024-12-07 00:34:37.178973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120099 ] 00:07:21.149 [2024-12-07 00:34:37.279460] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.149 [2024-12-07 00:34:37.279486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.406 [2024-12-07 00:34:37.364502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.335 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.335 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.335 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 120085 00:07:22.335 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 120085 00:07:22.335 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.592 lslocks: write error 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 120085 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120085 ']' 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120085 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120085 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120085' 00:07:22.592 killing process with pid 120085 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120085 00:07:22.592 00:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120085 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 120099 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120099 ']' 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120099 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120099 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120099' 00:07:23.524 killing 
process with pid 120099 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120099 00:07:23.524 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120099 00:07:23.782 00:07:23.782 real 0m3.080s 00:07:23.782 user 0m3.305s 00:07:23.782 sys 0m0.996s 00:07:23.782 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.782 00:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.782 ************************************ 00:07:23.782 END TEST non_locking_app_on_locked_coremask 00:07:23.782 ************************************ 00:07:23.782 00:34:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:23.782 00:34:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.782 00:34:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.782 00:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.782 ************************************ 00:07:23.782 START TEST locking_app_on_unlocked_coremask 00:07:23.782 ************************************ 00:07:23.782 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:23.782 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=120514 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 120514 /var/tmp/spdk.sock 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120514 ']' 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.783 00:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.783 [2024-12-07 00:34:39.891663] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:23.783 [2024-12-07 00:34:39.891758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120514 ] 00:07:24.040 [2024-12-07 00:34:39.957230] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
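The non_locking_app_on_locked_coremask run that just finished follows a pattern the rest of the suite reuses: a first target claims core 0, and a second target is pointed at the same core but with core locking turned off and its own RPC socket, so the two can run side by side. In outline (paths shortened from the trace; waitforlisten and cleanup omitted):

    # First instance: claims core 0 and takes /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # Second instance: same mask, but no core lock and a separate RPC socket.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

    # Only the first instance should show up as a lock holder.
    lslocks -p "$pid1" | grep -q spdk_cpu_lock          # expected to pass
    lslocks -p "$pid2" | grep -q spdk_cpu_lock || true  # expected to find nothing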
00:07:24.040 [2024-12-07 00:34:39.957261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.040 [2024-12-07 00:34:40.002013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=120532 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 120532 /var/tmp/spdk2.sock 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120532 ']' 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.297 00:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.297 [2024-12-07 00:34:40.326834] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:24.297 [2024-12-07 00:34:40.326914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120532 ] 00:07:24.297 [2024-12-07 00:34:40.439136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.555 [2024-12-07 00:34:40.538192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.121 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.121 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.121 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 120532 00:07:25.121 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 120532 00:07:25.121 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.380 lslocks: write error 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 120514 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120514 ']' 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 120514 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.380 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120514 00:07:25.638 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.638 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.638 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120514' 00:07:25.638 killing process with pid 120514 00:07:25.638 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 120514 00:07:25.638 00:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 120514 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 120532 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120532 ']' 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 120532 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120532 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.205 00:34:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120532' 00:07:26.205 killing process with pid 120532 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 120532 00:07:26.205 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 120532 00:07:26.773 00:07:26.773 real 0m2.877s 00:07:26.773 user 0m2.922s 00:07:26.773 sys 0m1.006s 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 ************************************ 00:07:26.773 END TEST locking_app_on_unlocked_coremask 00:07:26.773 ************************************ 00:07:26.773 00:34:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:26.773 00:34:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.773 00:34:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.773 00:34:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 ************************************ 00:07:26.773 START TEST locking_app_on_locked_coremask 00:07:26.773 ************************************ 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=120831 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 120831 /var/tmp/spdk.sock 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120831 ']' 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.773 00:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 [2024-12-07 00:34:42.820848] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:26.773 [2024-12-07 00:34:42.820941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120831 ] 00:07:26.773 [2024-12-07 00:34:42.888132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.032 [2024-12-07 00:34:42.931251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=120887 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 120887 /var/tmp/spdk2.sock 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 120887 /var/tmp/spdk2.sock 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.032 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 120887 /var/tmp/spdk2.sock 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120887 ']' 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.291 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.291 [2024-12-07 00:34:43.235963] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:27.291 [2024-12-07 00:34:43.236083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120887 ] 00:07:27.291 [2024-12-07 00:34:43.335591] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 120831 has claimed it. 00:07:27.291 [2024-12-07 00:34:43.335653] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:27.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (120887) - No such process 00:07:27.860 ERROR: process (pid: 120887) is no longer running 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 120831 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 120831 00:07:27.860 00:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.427 lslocks: write error 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 120831 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120831 ']' 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120831 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120831 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120831' 00:07:28.427 killing process with pid 120831 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120831 00:07:28.427 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120831 00:07:28.686 00:07:28.686 real 0m1.965s 00:07:28.686 user 0m2.180s 00:07:28.686 sys 0m0.644s 00:07:28.686 00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.686 
00:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.686 ************************************ 00:07:28.686 END TEST locking_app_on_locked_coremask 00:07:28.686 ************************************ 00:07:28.686 00:34:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:28.686 00:34:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.686 00:34:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.686 00:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.686 ************************************ 00:07:28.686 START TEST locking_overlapped_coremask 00:07:28.686 ************************************ 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=121129 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 121129 /var/tmp/spdk.sock 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 121129 ']' 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.686 00:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.945 [2024-12-07 00:34:44.840666] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
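locking_app_on_locked_coremask, which ended just above, is the negative case: the second spdk_tgt is launched on the already-claimed core without --disable-cpumask-locks, and the test requires it to abort with the "Cannot create lock on core 0, probably process ... has claimed it" error instead of starting. The NOT helper seen in the trace simply requires a non-zero exit status; a hypothetical stand-in for the whole step, not the autotest_common.sh implementation:

    # Stand-in for the suite's NOT helper: succeed only if the command fails.
    NOT() { ! "$@"; }

    # Core 0 is already held by the first target, so this launch is expected
    # to fail with "Unable to acquire lock on assigned core mask - exiting."
    NOT ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock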
00:07:28.945 [2024-12-07 00:34:44.840760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121129 ] 00:07:28.945 [2024-12-07 00:34:44.907508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.945 [2024-12-07 00:34:44.957977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.945 [2024-12-07 00:34:44.958032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.945 [2024-12-07 00:34:44.958037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=121140 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 121140 /var/tmp/spdk2.sock 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 121140 /var/tmp/spdk2.sock 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 121140 /var/tmp/spdk2.sock 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 121140 ']' 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.202 00:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 [2024-12-07 00:34:45.292272] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:29.202 [2024-12-07 00:34:45.292368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121140 ] 00:07:29.460 [2024-12-07 00:34:45.398237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 121129 has claimed it. 00:07:29.460 [2024-12-07 00:34:45.398307] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:30.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (121140) - No such process 00:07:30.027 ERROR: process (pid: 121140) is no longer running 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 121129 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 121129 ']' 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 121129 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121129 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121129' 00:07:30.027 killing process with pid 121129 00:07:30.027 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 121129 00:07:30.027 00:34:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 121129 00:07:30.594 00:07:30.594 real 0m1.664s 00:07:30.594 user 0m4.729s 00:07:30.594 sys 0m0.493s 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.594 ************************************ 00:07:30.594 END TEST locking_overlapped_coremask 00:07:30.594 ************************************ 00:07:30.594 00:34:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:30.594 00:34:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.594 00:34:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.594 00:34:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.594 ************************************ 00:07:30.594 START TEST locking_overlapped_coremask_via_rpc 00:07:30.594 ************************************ 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=121330 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 121330 /var/tmp/spdk.sock 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 121330 ']' 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.594 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.594 [2024-12-07 00:34:46.558864] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:30.595 [2024-12-07 00:34:46.558955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121330 ] 00:07:30.595 [2024-12-07 00:34:46.627008] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
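In the overlapped-coremask test above the two masks intersect on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target trips over core 2, which is precisely what the claim_cpu_cores error reports. The check_remaining_locks step then confirms that only the first target's lock files survive. A sketch that mirrors the globs visible in the trace:

    # Cores 0-2 belong to the surviving -m 0x7 target, so exactly
    # spdk_cpu_lock_000..002 should remain under /var/tmp.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }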
00:07:30.595 [2024-12-07 00:34:46.627060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.595 [2024-12-07 00:34:46.679253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.595 [2024-12-07 00:34:46.679312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.595 [2024-12-07 00:34:46.679315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=121433 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 121433 /var/tmp/spdk2.sock 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 121433 ']' 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.853 00:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.853 [2024-12-07 00:34:46.997115] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:30.853 [2024-12-07 00:34:46.997198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121433 ] 00:07:31.112 [2024-12-07 00:34:47.101778] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.112 [2024-12-07 00:34:47.101811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.112 [2024-12-07 00:34:47.198629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.112 [2024-12-07 00:34:47.198691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.112 [2024-12-07 00:34:47.198693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.048 [2024-12-07 00:34:47.978093] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 121330 has claimed it. 
00:07:32.048 request: 00:07:32.048 { 00:07:32.048 "method": "framework_enable_cpumask_locks", 00:07:32.048 "req_id": 1 00:07:32.048 } 00:07:32.048 Got JSON-RPC error response 00:07:32.048 response: 00:07:32.048 { 00:07:32.048 "code": -32603, 00:07:32.048 "message": "Failed to claim CPU core: 2" 00:07:32.048 } 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 121330 /var/tmp/spdk.sock 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 121330 ']' 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.048 00:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 121433 /var/tmp/spdk2.sock 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 121433 ']' 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
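The locking_overlapped_coremask_via_rpc variant reaches the same core-2 conflict through the RPC layer instead of at startup: both targets boot with --disable-cpumask-locks, the first (mask 0x7) claims its cores with framework_enable_cpumask_locks, and the identical call against the second target's socket (mask 0x1c) fails with the -32603 response shown above. Replayed by hand it would look roughly like this, using the stock scripts/rpc.py client and the socket paths from the trace:

    # First target, default /var/tmp/spdk.sock: claiming cores 0-2 succeeds.
    ./scripts/rpc.py framework_enable_cpumask_locks

    # Second target on /var/tmp/spdk2.sock: core 2 is already locked, so this
    # returns {"code": -32603, "message": "Failed to claim CPU core: 2"}.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks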
00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.306 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.565 00:07:32.565 real 0m2.023s 00:07:32.565 user 0m1.130s 00:07:32.565 sys 0m0.169s 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.565 00:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 ************************************ 00:07:32.565 END TEST locking_overlapped_coremask_via_rpc 00:07:32.565 ************************************ 00:07:32.565 00:34:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:32.565 00:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 121330 ]] 00:07:32.565 00:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 121330 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 121330 ']' 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 121330 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121330 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121330' 00:07:32.565 killing process with pid 121330 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 121330 00:07:32.565 00:34:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 121330 00:07:33.132 00:34:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 121433 ]] 00:07:33.132 00:34:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 121433 00:07:33.132 00:34:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 121433 ']' 00:07:33.132 00:34:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 121433 00:07:33.132 00:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:33.132 00:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:33.132 00:34:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121433 00:07:33.132 00:34:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:33.132 00:34:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:33.132 00:34:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121433' 00:07:33.132 killing process with pid 121433 00:07:33.132 00:34:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 121433 00:07:33.132 00:34:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 121433 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 121330 ]] 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 121330 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 121330 ']' 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 121330 00:07:33.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (121330) - No such process 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 121330 is not found' 00:07:33.391 Process with pid 121330 is not found 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 121433 ]] 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 121433 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 121433 ']' 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 121433 00:07:33.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (121433) - No such process 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 121433 is not found' 00:07:33.391 Process with pid 121433 is not found 00:07:33.391 00:34:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.391 00:07:33.391 real 0m15.204s 00:07:33.391 user 0m27.830s 00:07:33.391 sys 0m5.231s 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.391 00:34:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.391 ************************************ 00:07:33.391 END TEST cpu_locks 00:07:33.391 ************************************ 00:07:33.391 00:07:33.391 real 0m40.879s 00:07:33.391 user 1m19.795s 00:07:33.391 sys 0m9.367s 00:07:33.391 00:34:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.391 00:34:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.391 ************************************ 00:07:33.391 END TEST event 00:07:33.391 ************************************ 00:07:33.391 00:34:49 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.391 00:34:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.391 00:34:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.391 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.391 ************************************ 00:07:33.391 START TEST thread 00:07:33.391 ************************************ 00:07:33.391 00:34:49 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.391 * Looking for test storage... 00:07:33.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.651 00:34:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.651 00:34:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.651 00:34:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.651 00:34:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.651 00:34:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.651 00:34:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.651 00:34:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.651 00:34:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.651 00:34:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.651 00:34:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.651 00:34:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.651 00:34:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:33.651 00:34:49 thread -- scripts/common.sh@345 -- # : 1 00:07:33.651 00:34:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.651 00:34:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.651 00:34:49 thread -- scripts/common.sh@365 -- # decimal 1 00:07:33.651 00:34:49 thread -- scripts/common.sh@353 -- # local d=1 00:07:33.651 00:34:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.651 00:34:49 thread -- scripts/common.sh@355 -- # echo 1 00:07:33.651 00:34:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.651 00:34:49 thread -- scripts/common.sh@366 -- # decimal 2 00:07:33.651 00:34:49 thread -- scripts/common.sh@353 -- # local d=2 00:07:33.651 00:34:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.651 00:34:49 thread -- scripts/common.sh@355 -- # echo 2 00:07:33.651 00:34:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.651 00:34:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.651 00:34:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.651 00:34:49 thread -- scripts/common.sh@368 -- # return 0 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.651 --rc genhtml_branch_coverage=1 00:07:33.651 --rc genhtml_function_coverage=1 00:07:33.651 --rc genhtml_legend=1 00:07:33.651 --rc geninfo_all_blocks=1 00:07:33.651 --rc geninfo_unexecuted_blocks=1 00:07:33.651 00:07:33.651 ' 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.651 --rc genhtml_branch_coverage=1 00:07:33.651 --rc genhtml_function_coverage=1 00:07:33.651 --rc genhtml_legend=1 00:07:33.651 --rc geninfo_all_blocks=1 00:07:33.651 --rc geninfo_unexecuted_blocks=1 00:07:33.651 00:07:33.651 ' 00:07:33.651 00:34:49 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.651 --rc genhtml_branch_coverage=1 00:07:33.651 --rc genhtml_function_coverage=1 00:07:33.651 --rc genhtml_legend=1 00:07:33.651 --rc geninfo_all_blocks=1 00:07:33.651 --rc geninfo_unexecuted_blocks=1 00:07:33.651 00:07:33.651 ' 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.651 --rc genhtml_branch_coverage=1 00:07:33.651 --rc genhtml_function_coverage=1 00:07:33.651 --rc genhtml_legend=1 00:07:33.651 --rc geninfo_all_blocks=1 00:07:33.651 --rc geninfo_unexecuted_blocks=1 00:07:33.651 00:07:33.651 ' 00:07:33.651 00:34:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:33.651 00:34:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.652 00:34:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.652 ************************************ 00:07:33.652 START TEST thread_poller_perf 00:07:33.652 ************************************ 00:07:33.652 00:34:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.652 [2024-12-07 00:34:49.676896] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:33.652 [2024-12-07 00:34:49.676964] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121811 ] 00:07:33.652 [2024-12-07 00:34:49.745354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.652 [2024-12-07 00:34:49.793093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.652 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:35.028 [2024-12-06T23:34:51.179Z] ====================================== 00:07:35.028 [2024-12-06T23:34:51.179Z] busy:2710956048 (cyc) 00:07:35.028 [2024-12-06T23:34:51.179Z] total_run_count: 353000 00:07:35.028 [2024-12-06T23:34:51.179Z] tsc_hz: 2700000000 (cyc) 00:07:35.028 [2024-12-06T23:34:51.179Z] ====================================== 00:07:35.028 [2024-12-06T23:34:51.179Z] poller_cost: 7679 (cyc), 2844 (nsec) 00:07:35.028 00:07:35.028 real 0m1.181s 00:07:35.028 user 0m1.107s 00:07:35.028 sys 0m0.069s 00:07:35.028 00:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.028 00:34:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.029 ************************************ 00:07:35.029 END TEST thread_poller_perf 00:07:35.029 ************************************ 00:07:35.029 00:34:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.029 00:34:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:35.029 00:34:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.029 00:34:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.029 ************************************ 00:07:35.029 START TEST thread_poller_perf 00:07:35.029 ************************************ 00:07:35.029 00:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.029 [2024-12-07 00:34:50.911772] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:35.029 [2024-12-07 00:34:50.911838] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121963 ] 00:07:35.029 [2024-12-07 00:34:50.978505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.029 [2024-12-07 00:34:51.026590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.029 Running 1000 pollers for 1 seconds with 0 microseconds period. 
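The first poller_perf run above prints its raw counters (busy cycles, number of poller executions, TSC frequency) and then a derived poller_cost. That derived figure is simply busy divided by total_run_count, converted to nanoseconds via tsc_hz; the numbers in the table above match this formula exactly. Below is a minimal sketch of the arithmetic; the helper name is illustrative, not anything from SPDK, and the second run with a 0 microsecond period that follows reports its own counters under the same formula.

```python
# Arithmetic behind the poller_cost line printed by poller_perf above.
# The input values are copied verbatim from the 1 us-period run.

def poller_cost(busy_cyc: int, total_run_count: int, tsc_hz: int) -> tuple[int, int]:
    """Return (cycles per poller call, nanoseconds per poller call)."""
    cyc = busy_cyc // total_run_count             # 2710956048 // 353000 == 7679
    nsec = cyc * 1_000_000_000 // tsc_hz          # 7679 cycles at 2.7 GHz ~= 2844 ns
    return cyc, nsec

print(poller_cost(2_710_956_048, 353_000, 2_700_000_000))   # -> (7679, 2844)
```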
00:07:35.967 [2024-12-06T23:34:52.118Z] ====================================== 00:07:35.967 [2024-12-06T23:34:52.118Z] busy:2702387622 (cyc) 00:07:35.967 [2024-12-06T23:34:52.118Z] total_run_count: 4432000 00:07:35.967 [2024-12-06T23:34:52.118Z] tsc_hz: 2700000000 (cyc) 00:07:35.967 [2024-12-06T23:34:52.118Z] ====================================== 00:07:35.967 [2024-12-06T23:34:52.118Z] poller_cost: 609 (cyc), 225 (nsec) 00:07:35.967 00:07:35.967 real 0m1.174s 00:07:35.967 user 0m1.104s 00:07:35.967 sys 0m0.065s 00:07:35.967 00:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.967 00:34:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.967 ************************************ 00:07:35.967 END TEST thread_poller_perf 00:07:35.967 ************************************ 00:07:35.967 00:34:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:35.967 00:07:35.967 real 0m2.604s 00:07:35.967 user 0m2.352s 00:07:35.967 sys 0m0.258s 00:07:35.967 00:34:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.967 00:34:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.967 ************************************ 00:07:35.967 END TEST thread 00:07:35.967 ************************************ 00:07:36.227 00:34:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:36.227 00:34:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.227 00:34:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.227 00:34:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.227 00:34:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.227 ************************************ 00:07:36.227 START TEST app_cmdline 00:07:36.227 ************************************ 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.227 * Looking for test storage... 
00:07:36.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.227 00:34:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.227 --rc genhtml_branch_coverage=1 00:07:36.227 --rc genhtml_function_coverage=1 00:07:36.227 --rc genhtml_legend=1 00:07:36.227 --rc geninfo_all_blocks=1 00:07:36.227 --rc geninfo_unexecuted_blocks=1 00:07:36.227 00:07:36.227 ' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.227 --rc genhtml_branch_coverage=1 00:07:36.227 --rc genhtml_function_coverage=1 00:07:36.227 --rc genhtml_legend=1 00:07:36.227 --rc geninfo_all_blocks=1 00:07:36.227 --rc geninfo_unexecuted_blocks=1 
00:07:36.227 00:07:36.227 ' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.227 --rc genhtml_branch_coverage=1 00:07:36.227 --rc genhtml_function_coverage=1 00:07:36.227 --rc genhtml_legend=1 00:07:36.227 --rc geninfo_all_blocks=1 00:07:36.227 --rc geninfo_unexecuted_blocks=1 00:07:36.227 00:07:36.227 ' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.227 --rc genhtml_branch_coverage=1 00:07:36.227 --rc genhtml_function_coverage=1 00:07:36.227 --rc genhtml_legend=1 00:07:36.227 --rc geninfo_all_blocks=1 00:07:36.227 --rc geninfo_unexecuted_blocks=1 00:07:36.227 00:07:36.227 ' 00:07:36.227 00:34:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.227 00:34:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=122283 00:07:36.227 00:34:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.227 00:34:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 122283 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 122283 ']' 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.227 00:34:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.227 [2024-12-07 00:34:52.349100] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:36.228 [2024-12-07 00:34:52.349191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122283 ] 00:07:36.487 [2024-12-07 00:34:52.419429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.487 [2024-12-07 00:34:52.466903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.745 00:34:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.745 00:34:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:36.745 00:34:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.004 { 00:07:37.004 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:07:37.004 "fields": { 00:07:37.004 "major": 25, 00:07:37.004 "minor": 1, 00:07:37.004 "patch": 0, 00:07:37.004 "suffix": "-pre", 00:07:37.004 "commit": "a2f5e1c2d" 00:07:37.004 } 00:07:37.004 } 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.004 00:34:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.004 00:34:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.004 00:34:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:37.004 00:34:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.004 00:34:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.004 00:34:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.004 00:34:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.004 00:34:53 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.262 request: 00:07:37.262 { 00:07:37.262 "method": "env_dpdk_get_mem_stats", 00:07:37.262 "req_id": 1 00:07:37.262 } 00:07:37.262 Got JSON-RPC error response 00:07:37.262 response: 00:07:37.262 { 00:07:37.262 "code": -32601, 00:07:37.262 "message": "Method not found" 00:07:37.262 } 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.262 00:34:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 122283 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 122283 ']' 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 122283 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122283 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122283' 00:07:37.262 killing process with pid 122283 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 122283 00:07:37.262 00:34:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 122283 00:07:37.834 00:07:37.834 real 0m1.551s 00:07:37.834 user 0m1.903s 00:07:37.834 sys 0m0.490s 00:07:37.834 00:34:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.834 00:34:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.834 ************************************ 00:07:37.834 END TEST app_cmdline 00:07:37.834 ************************************ 00:07:37.834 00:34:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:37.834 00:34:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.834 00:34:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.834 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:37.834 ************************************ 00:07:37.834 START TEST version 00:07:37.834 ************************************ 00:07:37.834 00:34:53 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:37.834 * Looking for test storage... 
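The app_cmdline test above exercises spdk_tgt strictly through its JSON-RPC socket: spdk_get_version and rpc_get_methods are whitelisted with --rpcs-allowed, so the later env_dpdk_get_mem_stats call is rejected with code -32601 ("Method not found"), exactly as logged. Below is a minimal raw-socket sketch of that exchange; the default socket path /var/tmp/spdk.sock matches the waitforlisten line above, but the client function itself is illustrative and is not SPDK's rpc.py.

```python
import json
import socket

def spdk_rpc(method: str, sock_path: str = "/var/tmp/spdk.sock") -> dict:
    """Send one JSON-RPC 2.0 request to a running spdk_tgt and return its reply."""
    request = {"jsonrpc": "2.0", "method": method, "id": 1}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            buf += chunk
            try:
                return json.loads(buf)      # return once a complete JSON reply arrived
            except json.JSONDecodeError:
                if not chunk:               # connection closed with a partial reply
                    raise

# spdk_rpc("spdk_get_version")        -> the version object shown in the log above
# spdk_rpc("env_dpdk_get_mem_stats")  -> an error with code -32601 when the target
#                                        runs with the --rpcs-allowed whitelist
```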
00:07:37.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.834 00:34:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:37.834 00:34:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:37.834 00:34:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:37.834 00:34:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:37.834 00:34:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.834 00:34:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.835 00:34:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.835 00:34:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.835 00:34:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.835 00:34:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.835 00:34:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.835 00:34:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.835 00:34:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.835 00:34:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.835 00:34:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.835 00:34:53 version -- scripts/common.sh@344 -- # case "$op" in 00:07:37.835 00:34:53 version -- scripts/common.sh@345 -- # : 1 00:07:37.835 00:34:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.835 00:34:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.835 00:34:53 version -- scripts/common.sh@365 -- # decimal 1 00:07:37.835 00:34:53 version -- scripts/common.sh@353 -- # local d=1 00:07:37.835 00:34:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.835 00:34:53 version -- scripts/common.sh@355 -- # echo 1 00:07:37.835 00:34:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.835 00:34:53 version -- scripts/common.sh@366 -- # decimal 2 00:07:37.835 00:34:53 version -- scripts/common.sh@353 -- # local d=2 00:07:37.835 00:34:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.835 00:34:53 version -- scripts/common.sh@355 -- # echo 2 00:07:37.835 00:34:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.835 00:34:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.835 00:34:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.835 00:34:53 version -- scripts/common.sh@368 -- # return 0 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:37.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.835 --rc genhtml_branch_coverage=1 00:07:37.835 --rc genhtml_function_coverage=1 00:07:37.835 --rc genhtml_legend=1 00:07:37.835 --rc geninfo_all_blocks=1 00:07:37.835 --rc geninfo_unexecuted_blocks=1 00:07:37.835 00:07:37.835 ' 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:37.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.835 --rc genhtml_branch_coverage=1 00:07:37.835 --rc genhtml_function_coverage=1 00:07:37.835 --rc genhtml_legend=1 00:07:37.835 --rc geninfo_all_blocks=1 00:07:37.835 --rc geninfo_unexecuted_blocks=1 00:07:37.835 00:07:37.835 ' 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:37.835 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.835 --rc genhtml_branch_coverage=1 00:07:37.835 --rc genhtml_function_coverage=1 00:07:37.835 --rc genhtml_legend=1 00:07:37.835 --rc geninfo_all_blocks=1 00:07:37.835 --rc geninfo_unexecuted_blocks=1 00:07:37.835 00:07:37.835 ' 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:37.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.835 --rc genhtml_branch_coverage=1 00:07:37.835 --rc genhtml_function_coverage=1 00:07:37.835 --rc genhtml_legend=1 00:07:37.835 --rc geninfo_all_blocks=1 00:07:37.835 --rc geninfo_unexecuted_blocks=1 00:07:37.835 00:07:37.835 ' 00:07:37.835 00:34:53 version -- app/version.sh@17 -- # get_header_version major 00:07:37.835 00:34:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # cut -f2 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.835 00:34:53 version -- app/version.sh@17 -- # major=25 00:07:37.835 00:34:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:37.835 00:34:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # cut -f2 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.835 00:34:53 version -- app/version.sh@18 -- # minor=1 00:07:37.835 00:34:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:37.835 00:34:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # cut -f2 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.835 00:34:53 version -- app/version.sh@19 -- # patch=0 00:07:37.835 00:34:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:37.835 00:34:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # cut -f2 00:07:37.835 00:34:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.835 00:34:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:37.835 00:34:53 version -- app/version.sh@22 -- # version=25.1 00:07:37.835 00:34:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:37.835 00:34:53 version -- app/version.sh@28 -- # version=25.1rc0 00:07:37.835 00:34:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.835 00:34:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:37.835 00:34:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:37.835 00:34:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:37.835 00:07:37.835 real 0m0.200s 00:07:37.835 user 0m0.133s 00:07:37.835 sys 0m0.094s 00:07:37.835 00:34:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.835 
00:34:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:37.835 ************************************ 00:07:37.835 END TEST version 00:07:37.835 ************************************ 00:07:37.835 00:34:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:37.835 00:34:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:37.835 00:34:53 -- spdk/autotest.sh@194 -- # uname -s 00:07:37.835 00:34:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:37.835 00:34:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:37.835 00:34:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:37.835 00:34:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:37.835 00:34:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:37.835 00:34:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:37.835 00:34:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.835 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:07:38.094 00:34:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:38.094 00:34:54 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:38.094 00:34:54 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:38.094 00:34:54 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:38.094 00:34:54 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:38.094 00:34:54 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:38.094 00:34:54 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.094 00:34:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.094 00:34:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.094 00:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.094 ************************************ 00:07:38.094 START TEST nvmf_tcp 00:07:38.094 ************************************ 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.094 * Looking for test storage... 
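The version test above rebuilds the release string by grepping SPDK_VERSION_MAJOR, MINOR, PATCH and SUFFIX out of include/spdk/version.h with grep/cut/tr, arriving at 25.1rc0, and compares that against python3 -c 'import spdk; print(spdk.__version__)'. A rough Python equivalent of the header parsing follows; the regex and the rc0 handling mirror what the trace above ends up doing and are assumptions about version.sh rather than a copy of it.

```python
import re
from pathlib import Path

def parse_spdk_version(version_h: str = "include/spdk/version.h") -> str:
    """Rebuild a '25.1rc0'-style string from the SPDK_VERSION_* defines."""
    text = Path(version_h).read_text()

    def field(name: str) -> str:
        m = re.search(rf"^#define SPDK_VERSION_{name}\s+(.+)$", text, re.MULTILINE)
        return m.group(1).strip().strip('"') if m else ""

    major, minor, patch, suffix = (field(n) for n in ("MAJOR", "MINOR", "PATCH", "SUFFIX"))
    version = f"{major}.{minor}"
    if patch and patch != "0":                 # the trace above skips patch == 0
        version += f".{patch}"
    if suffix:                                 # '-pre' in this tree, so append rc0
        version += "rc0"
    return version

# parse_spdk_version() -> '25.1rc0' for the tree checked out above
```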
00:07:38.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.094 00:34:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.094 --rc genhtml_branch_coverage=1 00:07:38.094 --rc genhtml_function_coverage=1 00:07:38.094 --rc genhtml_legend=1 00:07:38.094 --rc geninfo_all_blocks=1 00:07:38.094 --rc geninfo_unexecuted_blocks=1 00:07:38.094 00:07:38.094 ' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.094 --rc genhtml_branch_coverage=1 00:07:38.094 --rc genhtml_function_coverage=1 00:07:38.094 --rc genhtml_legend=1 00:07:38.094 --rc geninfo_all_blocks=1 00:07:38.094 --rc geninfo_unexecuted_blocks=1 00:07:38.094 00:07:38.094 ' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.094 --rc genhtml_branch_coverage=1 00:07:38.094 --rc genhtml_function_coverage=1 00:07:38.094 --rc genhtml_legend=1 00:07:38.094 --rc geninfo_all_blocks=1 00:07:38.094 --rc geninfo_unexecuted_blocks=1 00:07:38.094 00:07:38.094 ' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.094 --rc genhtml_branch_coverage=1 00:07:38.094 --rc genhtml_function_coverage=1 00:07:38.094 --rc genhtml_legend=1 00:07:38.094 --rc geninfo_all_blocks=1 00:07:38.094 --rc geninfo_unexecuted_blocks=1 00:07:38.094 00:07:38.094 ' 00:07:38.094 00:34:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:38.094 00:34:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:38.094 00:34:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.094 00:34:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.095 00:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.095 ************************************ 00:07:38.095 START TEST nvmf_target_core 00:07:38.095 ************************************ 00:07:38.095 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:38.354 * Looking for test storage... 00:07:38.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.354 --rc genhtml_branch_coverage=1 00:07:38.354 --rc genhtml_function_coverage=1 00:07:38.354 --rc genhtml_legend=1 00:07:38.354 --rc geninfo_all_blocks=1 00:07:38.354 --rc geninfo_unexecuted_blocks=1 00:07:38.354 00:07:38.354 ' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.354 --rc genhtml_branch_coverage=1 00:07:38.354 --rc genhtml_function_coverage=1 00:07:38.354 --rc genhtml_legend=1 00:07:38.354 --rc geninfo_all_blocks=1 00:07:38.354 --rc geninfo_unexecuted_blocks=1 00:07:38.354 00:07:38.354 ' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.354 --rc genhtml_branch_coverage=1 00:07:38.354 --rc genhtml_function_coverage=1 00:07:38.354 --rc genhtml_legend=1 00:07:38.354 --rc geninfo_all_blocks=1 00:07:38.354 --rc geninfo_unexecuted_blocks=1 00:07:38.354 00:07:38.354 ' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.354 --rc genhtml_branch_coverage=1 00:07:38.354 --rc genhtml_function_coverage=1 00:07:38.354 --rc genhtml_legend=1 00:07:38.354 --rc geninfo_all_blocks=1 00:07:38.354 --rc geninfo_unexecuted_blocks=1 00:07:38.354 00:07:38.354 ' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:38.354 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.355 
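Sourcing test/nvmf/common.sh above fixes the NVMe-oF test defaults: listener ports 4420/4421/4422, the 192.168.100.x address range with least address 8, a host NQN freshly generated by nvme gen-hostnqn, and the target subsystem NQN nqn.2016-06.io.spdk:testnqn, with NVME_CONNECT set to plain 'nvme connect'. Below is a hedged sketch of how those defaults would map onto an nvme-cli connect call; the target address is only an example drawn from the IP prefix above, and the wrapper function is illustrative.

```python
import subprocess

# Defaults copied from the common.sh trace above; the address used in the example
# call is a placeholder from the 192.168.100.x test range, not a value in this log.
NVMF_PORT = 4420
NVME_SUBNQN = "nqn.2016-06.io.spdk:testnqn"
NVME_HOSTNQN = "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

def nvme_connect(addr: str, port: int = NVMF_PORT) -> None:
    """Connect to the SPDK test subsystem over NVMe/TCP using standard nvme-cli flags."""
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", addr, "-s", str(port),
         "-n", NVME_SUBNQN, "--hostnqn", NVME_HOSTNQN],
        check=True,
    )

# nvme_connect("192.168.100.8")   # example address built from NVMF_IP_PREFIX/LEAST_ADDR
```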
************************************ 00:07:38.355 START TEST nvmf_abort 00:07:38.355 ************************************ 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:38.355 * Looking for test storage... 00:07:38.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.355 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.615 --rc genhtml_branch_coverage=1 00:07:38.615 --rc genhtml_function_coverage=1 00:07:38.615 --rc genhtml_legend=1 00:07:38.615 --rc geninfo_all_blocks=1 00:07:38.615 --rc geninfo_unexecuted_blocks=1 00:07:38.615 00:07:38.615 ' 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.615 --rc genhtml_branch_coverage=1 00:07:38.615 --rc genhtml_function_coverage=1 00:07:38.615 --rc genhtml_legend=1 00:07:38.615 --rc geninfo_all_blocks=1 00:07:38.615 --rc geninfo_unexecuted_blocks=1 00:07:38.615 00:07:38.615 ' 00:07:38.615 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.615 --rc genhtml_branch_coverage=1 00:07:38.615 --rc genhtml_function_coverage=1 00:07:38.616 --rc genhtml_legend=1 00:07:38.616 --rc geninfo_all_blocks=1 00:07:38.616 --rc geninfo_unexecuted_blocks=1 00:07:38.616 00:07:38.616 ' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.616 --rc genhtml_branch_coverage=1 00:07:38.616 --rc genhtml_function_coverage=1 00:07:38.616 --rc genhtml_legend=1 00:07:38.616 --rc geninfo_all_blocks=1 00:07:38.616 --rc geninfo_unexecuted_blocks=1 00:07:38.616 00:07:38.616 ' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.616 00:34:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.523 00:34:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.523 00:34:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.523 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.524 00:34:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.524 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.524 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:07:40.782 00:07:40.782 --- 10.0.0.2 ping statistics --- 00:07:40.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.782 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:07:40.782 00:07:40.782 --- 10.0.0.1 ping statistics --- 00:07:40.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.782 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=124368 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 124368 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 124368 ']' 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.782 00:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.782 [2024-12-07 00:34:56.909794] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
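The block above establishes the point-to-point topology used for the rest of the run: the target-side port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1, TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF (the later "iptables-save | grep -v SPDK_NVMF | iptables-restore" teardown filters on that tag), and connectivity is verified with one ping in each direction. A condensed replay of the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # default namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator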
00:07:40.782 [2024-12-07 00:34:56.909887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.041 [2024-12-07 00:34:56.984937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.041 [2024-12-07 00:34:57.033611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.041 [2024-12-07 00:34:57.033661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.041 [2024-12-07 00:34:57.033689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.041 [2024-12-07 00:34:57.033701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.041 [2024-12-07 00:34:57.033710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.041 [2024-12-07 00:34:57.035057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.041 [2024-12-07 00:34:57.035120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.041 [2024-12-07 00:34:57.035117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.041 [2024-12-07 00:34:57.183516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.041 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 Malloc0 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 Delay0 
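The rpc_cmd calls above and in the trace that follows configure the abort-test target entirely over SPDK's JSON-RPC interface: a TCP transport, a 64 MiB Malloc bdev with 4096-byte blocks, a Delay0 bdev layered on it with the delay parameters given in the trace, and subsystem nqn.2016-06.io.spdk:cnode0 exposing Delay0 on 10.0.0.2:4420. A sketch of the equivalent sequence using scripts/rpc.py directly (the same script the test harness wraps, shown here with default socket options; the abort example invocation at the end is the one traced further below):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0                # 64 MiB malloc bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The abort example is then pointed at the listener with queue depth 128:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128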
00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 [2024-12-07 00:34:57.258136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.298 00:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:41.298 [2024-12-07 00:34:57.373900] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:43.827 Initializing NVMe Controllers 00:07:43.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:43.827 controller IO queue size 128 less than required 00:07:43.827 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:43.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:43.827 Initialization complete. Launching workers. 
00:07:43.827 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28200 00:07:43.827 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28261, failed to submit 62 00:07:43.827 success 28204, unsuccessful 57, failed 0 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.827 rmmod nvme_tcp 00:07:43.827 rmmod nvme_fabrics 00:07:43.827 rmmod nvme_keyring 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 124368 ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 124368 ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124368' 00:07:43.827 killing process with pid 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 124368 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.827 00:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.734 00:07:45.734 real 0m7.412s 00:07:45.734 user 0m10.846s 00:07:45.734 sys 0m2.400s 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:45.734 ************************************ 00:07:45.734 END TEST nvmf_abort 00:07:45.734 ************************************ 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.734 ************************************ 00:07:45.734 START TEST nvmf_ns_hotplug_stress 00:07:45.734 ************************************ 00:07:45.734 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:45.995 * Looking for test storage... 
00:07:45.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.995 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.995 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.995 00:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.995 --rc genhtml_branch_coverage=1 00:07:45.995 --rc genhtml_function_coverage=1 00:07:45.995 --rc genhtml_legend=1 00:07:45.995 --rc geninfo_all_blocks=1 00:07:45.995 --rc geninfo_unexecuted_blocks=1 00:07:45.995 00:07:45.995 ' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.995 --rc genhtml_branch_coverage=1 00:07:45.995 --rc genhtml_function_coverage=1 00:07:45.995 --rc genhtml_legend=1 00:07:45.995 --rc geninfo_all_blocks=1 00:07:45.995 --rc geninfo_unexecuted_blocks=1 00:07:45.995 00:07:45.995 ' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.995 --rc genhtml_branch_coverage=1 00:07:45.995 --rc genhtml_function_coverage=1 00:07:45.995 --rc genhtml_legend=1 00:07:45.995 --rc geninfo_all_blocks=1 00:07:45.995 --rc geninfo_unexecuted_blocks=1 00:07:45.995 00:07:45.995 ' 00:07:45.995 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.995 --rc genhtml_branch_coverage=1 00:07:45.995 --rc genhtml_function_coverage=1 00:07:45.995 --rc genhtml_legend=1 00:07:45.995 --rc geninfo_all_blocks=1 00:07:45.995 --rc geninfo_unexecuted_blocks=1 00:07:45.995 00:07:45.996 ' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.996 00:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:48.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.534 
00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:48.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:48.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:48.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.534 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:07:48.535 00:07:48.535 --- 10.0.0.2 ping statistics --- 00:07:48.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.535 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:07:48.535 00:07:48.535 --- 10.0.0.1 ping statistics --- 00:07:48.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.535 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126617 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126617 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
126617 ']' 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.535 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.535 [2024-12-07 00:35:04.514943] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:48.535 [2024-12-07 00:35:04.515062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.535 [2024-12-07 00:35:04.588175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.535 [2024-12-07 00:35:04.637125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.535 [2024-12-07 00:35:04.637179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.535 [2024-12-07 00:35:04.637209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.535 [2024-12-07 00:35:04.637221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.535 [2024-12-07 00:35:04.637231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
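The trace above is nvmf/common.sh building the test topology: the second detected port (cvl_0_1) stays in the root namespace as the initiator side, the first (cvl_0_0) is moved into a fresh namespace and becomes the target side, NVMe/TCP port 4420 is opened, and nvmf_tgt is started inside that namespace with core mask 0xE (hence the three reactor start-ups just below). A condensed bash recap of those steps, not the verbatim helper functions (paths shortened; interface names and addresses are the ones detected in the log):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
ping -c 1 10.0.0.2                                  # root ns -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &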
00:07:48.535 [2024-12-07 00:35:04.638727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.535 [2024-12-07 00:35:04.638791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.535 [2024-12-07 00:35:04.638795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:48.793 00:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.052 [2024-12-07 00:35:05.017253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.053 00:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:49.310 00:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.566 [2024-12-07 00:35:05.555809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.566 00:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.823 00:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:50.080 Malloc0 00:07:50.080 00:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:50.338 Delay0 00:07:50.338 00:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.596 00:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:50.855 NULL1 00:07:50.855 00:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:51.113 00:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=127031 00:07:51.113 00:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:51.113 00:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:51.113 00:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.490 Read completed with error (sct=0, sc=11) 00:07:52.490 00:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.748 00:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:52.748 00:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:53.006 true 00:07:53.006 00:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:53.006 00:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.941 00:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.941 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:53.941 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:54.199 true 00:07:54.199 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:54.199 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.457 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.715 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:54.716 00:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:54.975 true 00:07:54.975 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:54.975 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.233 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.798 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:55.799 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:55.799 true 00:07:55.799 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:55.799 00:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.172 00:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.172 00:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:57.172 00:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:57.429 true 00:07:57.429 00:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:57.429 00:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.687 00:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.944 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:57.944 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:58.202 true 00:07:58.202 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:58.202 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.461 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.719 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:58.719 00:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:58.978 true 00:07:58.978 00:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:07:58.978 00:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.348 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.348 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:00.348 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:00.605 true 00:08:00.605 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:00.605 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.862 00:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.120 00:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:01.120 00:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:01.378 true 00:08:01.378 00:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:01.378 00:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.635 00:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.893 00:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:01.893 00:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:02.151 true 00:08:02.151 00:35:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:02.151 00:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.084 00:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.599 00:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:03.599 00:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:03.599 true 00:08:03.856 00:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:03.856 00:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.114 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.372 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:04.372 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:04.630 true 00:08:04.630 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:04.630 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.888 00:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.144 00:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:05.144 00:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:05.401 true 00:08:05.401 00:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:05.401 00:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.332 00:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.589 00:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:06.589 00:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:06.846 true 00:08:06.846 00:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:06.846 00:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.104 00:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.362 00:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:07.362 00:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:07.620 true 00:08:07.620 00:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:07.620 00:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.878 00:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.136 00:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:08.136 00:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:08.394 true 00:08:08.652 00:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:08.652 00:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.584 00:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.841 00:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:09.841 00:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:10.098 true 00:08:10.098 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:10.098 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
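What repeats above is the main loop of ns_hotplug_stress.sh: the spdk_nvme_perf job launched at @40 (-q 128 -w randread -o 512, 30 seconds against 10.0.0.2:4420, PID 127031) keeps I/O in flight while the script detaches namespace 1, re-attaches Delay0, and resizes NULL1 one step larger each pass (null_size 1001, 1002, ...). The suppressed "Read completed with error (sct=0, sc=11)" messages are reads landing in the window where the namespace is detached, which is exactly what the test wants to exercise. A sketch of the loop, reconstructed from the @44-@50 trace rather than copied from the script (rpc.py here stands in for the full workspace path used in the log):

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                            # @44: run while perf is alive
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove namespace 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach Delay0
    null_size=$((null_size + 1))                                     # @49
    rpc.py bdev_null_resize NULL1 "$null_size"                       # @50: prints "true" on success
done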
00:08:10.356 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.614 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:10.614 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:10.873 true 00:08:10.873 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:10.873 00:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.131 00:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.389 00:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:11.389 00:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:11.647 true 00:08:11.647 00:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:11.647 00:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.582 00:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.840 00:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:12.840 00:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:13.097 true 00:08:13.098 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:13.098 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.356 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.614 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:13.614 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:13.872 true 00:08:13.872 
00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:13.872 00:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.807 00:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.064 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:15.064 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:15.323 true 00:08:15.323 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:15.323 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.581 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.839 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:15.839 00:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:16.097 true 00:08:16.097 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:16.097 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.354 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.612 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:16.612 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:16.870 true 00:08:16.870 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:16.870 00:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.801 00:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.801 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.058 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:18.058 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:18.316 true 00:08:18.316 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:18.316 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.573 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.830 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:18.830 00:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:19.087 true 00:08:19.087 00:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:19.087 00:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.018 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.275 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:20.275 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:20.532 true 00:08:20.532 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:20.532 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.789 00:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.046 00:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:21.046 00:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:21.304 true 00:08:21.560 00:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:21.560 00:35:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.125 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.384 Initializing NVMe Controllers 00:08:22.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:22.384 Controller IO queue size 128, less than required. 00:08:22.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:22.384 Controller IO queue size 128, less than required. 00:08:22.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:22.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:22.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:22.384 Initialization complete. Launching workers. 00:08:22.384 ======================================================== 00:08:22.384 Latency(us) 00:08:22.384 Device Information : IOPS MiB/s Average min max 00:08:22.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 641.87 0.31 88902.95 3310.81 1013978.70 00:08:22.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8789.83 4.29 14562.19 3399.70 448904.33 00:08:22.384 ======================================================== 00:08:22.384 Total : 9431.70 4.61 19621.39 3310.81 1013978.70 00:08:22.384 00:08:22.384 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:22.384 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:22.641 true 00:08:22.641 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127031 00:08:22.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (127031) - No such process 00:08:22.641 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 127031 00:08:22.641 00:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.898 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.156 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:23.156 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:23.414 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:23.414 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.414 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:23.672 null0 00:08:23.672 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:23.672 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.672 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:23.672 null1 00:08:23.930 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:23.930 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:23.930 00:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:24.188 null2 00:08:24.188 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.188 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.188 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:24.446 null3 00:08:24.446 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.446 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.446 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:24.703 null4 00:08:24.703 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.703 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.703 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:24.961 null5 00:08:24.961 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:24.961 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:24.961 00:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:25.219 null6 00:08:25.219 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:25.219 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:25.219 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:25.478 null7 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
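By this point perf has exited (the kill -0 at line 44 reported "No such process" and the loop ended), the subsystem's remaining namespaces were dropped at @54/@55, and the script has moved on to the parallel phase: eight null bdevs (null0 .. null7) and eight add_remove workers, one per namespace ID. The interleaved @14-@18 lines above and below are those workers running. The helper itself is a short loop; sketched from the trace, under the same rpc.py shorthand:

add_remove() {
    local nsid=$1 bdev=$2                                                 # @14
    for ((i = 0; i < 10; i++)); do                                        # @16
        rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
    done
}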
00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.478 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
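The @58-@66 lines scattered through this stretch are the launcher for those workers: nthreads=8, one null bdev created per worker, each add_remove call backgrounded with its PID collected, and finally a single wait on all eight PIDs (listed at @66 just below for this run). Approximately, with the same caveats as the sketches above:

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc.py bdev_null_create "null$i" 100 4096      # @60: 100 MB bdev, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &               # @63: nsid i+1 paired with null$i
    pids+=($!)                                     # @64
done
wait "${pids[@]}"                                  # @66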
00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 131113 131114 131116 131118 131120 131122 131124 131126 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.479 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.737 00:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.996 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.255 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.255 00:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.256 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.514 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.514 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.515 00:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.515 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.774 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.033 00:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.033 00:35:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.292 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.551 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.812 00:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.071 00:35:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.071 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.329 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.588 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.847 00:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.114 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.115 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.377 00:35:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.377 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.634 00:35:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.634 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.892 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.892 00:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.892 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.892 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.892 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.892 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.892 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.892 00:35:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.150 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.409 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.667 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.925 00:35:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.925 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.926 00:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.184 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.184 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.185 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.185 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.185 00:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.185 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.185 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.185 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:31.444 00:35:47 
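[Editor's note] The repeated trace above comes from lines 16-18 of ns_hotplug_stress.sh: eight null bdevs are attached to and detached from nqn.2016-06.io.spdk:cnode1 over and over via rpc.py. A rough reconstruction of that loop, inferred only from the xtrace output (the helper name add_remove, the backgrounding of the eight workers, and the exact loop bounds are assumptions; the real script may differ), is:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {                       # hypothetical helper name
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do   # sh@16: the increment and bound seen in the trace
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # sh@18
        done
    }

    # One worker per namespace; the interleaved nsids in the log suggest the
    # workers run concurrently (an assumption), so each add/remove pair races
    # with the others to stress namespace hot-plug on the target.
    for nsid in $(seq 1 8); do
        add_remove "$nsid" "null$((nsid - 1))" &
    done
    wait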
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.444 rmmod nvme_tcp 00:08:31.444 rmmod nvme_fabrics 00:08:31.444 rmmod nvme_keyring 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126617 ']' 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126617 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126617 ']' 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126617 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.444 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126617 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126617' 00:08:31.705 killing process with pid 126617 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126617 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126617 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:08:31.705 00:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.253 00:08:34.253 real 0m48.001s 00:08:34.253 user 3m42.656s 00:08:34.253 sys 0m16.193s 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.253 ************************************ 00:08:34.253 END TEST nvmf_ns_hotplug_stress 00:08:34.253 ************************************ 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.253 ************************************ 00:08:34.253 START TEST nvmf_delete_subsystem 00:08:34.253 ************************************ 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:34.253 * Looking for test storage... 00:08:34.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.253 00:35:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.253 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.253 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.253 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.253 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.253 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.254 00:35:50 
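[Editor's note] The teardown traced just above (nvmftestfini) first unloads nvme-tcp, nvme-fabrics and nvme-keyring, then kills the nvmf_tgt reactor, pid 126617. The killprocess helper whose checks are visible at autotest_common.sh@954-@978 can be sketched roughly as follows; this is a reconstruction from the visible checks, not the exact SPDK implementation:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                          # @954: require a pid
        kill -0 "$pid"                                     # @958: fail if it is already gone
        if [ "$(uname)" = Linux ]; then                    # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1 in this run
        fi
        if [ "$process_name" = sudo ]; then                # @964: branch not taken here
            :  # the real helper resolves the child pid in this case (details not in the log)
        fi
        echo "killing process with pid $pid"               # @972
        kill "$pid"                                        # @973
        wait "$pid"                                        # @978: wait for the target to exit
    }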
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.254 --rc genhtml_branch_coverage=1 00:08:34.254 --rc genhtml_function_coverage=1 00:08:34.254 --rc genhtml_legend=1 00:08:34.254 --rc geninfo_all_blocks=1 00:08:34.254 --rc geninfo_unexecuted_blocks=1 00:08:34.254 00:08:34.254 ' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.254 --rc genhtml_branch_coverage=1 00:08:34.254 --rc genhtml_function_coverage=1 00:08:34.254 --rc genhtml_legend=1 00:08:34.254 --rc geninfo_all_blocks=1 00:08:34.254 --rc geninfo_unexecuted_blocks=1 00:08:34.254 00:08:34.254 ' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.254 --rc genhtml_branch_coverage=1 00:08:34.254 --rc genhtml_function_coverage=1 00:08:34.254 --rc genhtml_legend=1 00:08:34.254 --rc geninfo_all_blocks=1 00:08:34.254 --rc geninfo_unexecuted_blocks=1 00:08:34.254 00:08:34.254 ' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.254 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.254 --rc genhtml_branch_coverage=1 00:08:34.254 --rc genhtml_function_coverage=1 00:08:34.254 --rc genhtml_legend=1 00:08:34.254 --rc geninfo_all_blocks=1 00:08:34.254 --rc geninfo_unexecuted_blocks=1 00:08:34.254 00:08:34.254 ' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
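[Editor's note] The lt/cmp_versions trace from scripts/common.sh above decides whether the installed lcov predates 2.x so the right LCOV_OPTS can be exported. A simplified sketch of that dotted-version comparison, reconstructed from the trace (the real helper also handles '>', '=' and mixed separators more fully):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.- read -ra ver1 <<< "$1"            # "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$3"            # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0
            if ((a < b)); then [[ $op == "<" ]]; return $?; fi
            if ((a > b)); then [[ $op == ">" ]]; return $?; fi
        done
        [[ $op == "=" ]]                         # all fields equal
    }
    lt() { cmp_versions "$1" "<" "$2"; }         # e.g. lt 1.15 2 -> true, as in the trace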
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.254 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.255 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.255 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.255 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.255 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.255 00:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
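The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33, where an empty string reaches a numeric '[' test ('[' '' -eq 1 ']'). A minimal sketch of the usual guard, using a hypothetical flag name rather than the actual SPDK variable:

#!/usr/bin/env bash
# Minimal sketch (not the SPDK source): default the variable before a numeric
# test so '[' never sees an empty string and aborts with
# "integer expression expected", as logged for nvmf/common.sh line 33 above.
# SPDK_TEST_EXAMPLE_FLAG is a hypothetical name used only for illustration.

if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi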
local -ga x722 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.164 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.165 
00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.165 Found net devices under 0000:0a:00.1: cvl_0_1 
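The gather_supported_nvmf_pci_devs loop traced above resolves each supported PCI function to its kernel net devices by globbing sysfs. A hedged stand-alone sketch of that lookup; the PCI addresses are the ones reported in the log, everything else is illustrative:

#!/usr/bin/env bash
# Hedged sketch of the sysfs lookup traced above: map each PCI function of the
# NIC to the net devices the kernel exposes for it.

for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # If the glob did not match, the array holds the literal pattern.
    if [[ ! -e ${pci_net_devs[0]} ]]; then
        echo "No net devices under $pci"
        continue
    fi
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done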
00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.165 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:08:36.425 00:08:36.425 --- 10.0.0.2 ping statistics --- 00:08:36.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.425 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:08:36.425 00:08:36.425 --- 10.0.0.1 ping statistics --- 00:08:36.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.425 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=134017 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 134017 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 134017 ']' 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.425 00:35:52 
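The nvmf_tcp_init steps traced above split one NIC between a target network namespace (cvl_0_0, 10.0.0.2) and the initiator in the default namespace (cvl_0_1, 10.0.0.1), open TCP port 4420, and verify reachability with ping in both directions. A condensed sketch of that sequence, run as root, using the interface names and addresses shown in the log (the real helper also tags its iptables rule with an SPDK_NVMF comment for later cleanup):

#!/usr/bin/env bash
# Condensed sketch of the namespace split performed above: cvl_0_0 becomes the
# target interface inside its own namespace, cvl_0_1 stays in the default
# namespace as the initiator, port 4420 is opened, and both directions are
# verified with ping before the target application is started.
set -e

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in, then confirm reachability before starting nvmf_tgt.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1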
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.425 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.425 [2024-12-07 00:35:52.402709] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:08:36.425 [2024-12-07 00:35:52.402803] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.425 [2024-12-07 00:35:52.479453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.425 [2024-12-07 00:35:52.527437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.425 [2024-12-07 00:35:52.527492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.425 [2024-12-07 00:35:52.527521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.425 [2024-12-07 00:35:52.527532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.425 [2024-12-07 00:35:52.527542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.425 [2024-12-07 00:35:52.528948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.425 [2024-12-07 00:35:52.528953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 [2024-12-07 00:35:52.680117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.685 00:35:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 [2024-12-07 00:35:52.696364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 NULL1 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 Delay0 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=134039 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:36.685 00:35:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:36.685 [2024-12-07 00:35:52.781091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
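The rpc_cmd calls traced above configure the target for the delete-under-load test: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev as its namespace, after which spdk_nvme_perf is started against it. A condensed sketch using the same RPC names and arguments; the relative paths to rpc.py and spdk_nvme_perf assume an SPDK build tree and are illustrative (the RPCs go over the default /var/tmp/spdk.sock UNIX socket):

#!/usr/bin/env bash
# Condensed sketch of the target configuration driven via rpc_cmd above.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# A null bdev wrapped in a delay bdev keeps plenty of I/O in flight so the
# subsystem can be deleted while requests are still queued.
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns "$NQN" Delay0

# Load generator on the initiator side, backgrounded so the subsystem can be
# deleted while it is still connected.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!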
00:08:38.584 00:35:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.584 00:35:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.584 00:35:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 [2024-12-07 00:35:54.904016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbf150 is same with the state(6) to be set 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read 
completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Write completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 starting I/O failed: -6 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.843 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, 
sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 starting I/O failed: -6 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 [2024-12-07 00:35:54.904702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e84000c40 is same with the state(6) to be set 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error 
(sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Read completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:38.844 Write completed with error (sct=0, sc=8) 00:08:39.790 [2024-12-07 00:35:55.876429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbd190 is same with the state(6) to be set 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 [2024-12-07 00:35:55.905508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbef70 is same with the state(6) to be set 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 
00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Read completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.790 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 [2024-12-07 00:35:55.905760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbf330 is same with the state(6) to be set 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 [2024-12-07 00:35:55.906185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e8400d7e0 is same with the state(6) to be set 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read 
completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Write completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 Read completed with error (sct=0, sc=8) 00:08:39.791 [2024-12-07 00:35:55.906366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e8400d020 is same with the state(6) to be set 00:08:39.791 Initializing NVMe Controllers 00:08:39.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.791 Controller IO queue size 128, less than required. 00:08:39.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:39.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:39.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:39.791 Initialization complete. Launching workers. 00:08:39.791 ======================================================== 00:08:39.791 Latency(us) 00:08:39.791 Device Information : IOPS MiB/s Average min max 00:08:39.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.17 0.08 892602.98 574.63 1012572.69 00:08:39.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.69 0.08 897158.28 375.16 1013069.76 00:08:39.791 ======================================================== 00:08:39.791 Total : 339.86 0.17 894864.01 375.16 1013069.76 00:08:39.791 00:08:39.791 [2024-12-07 00:35:55.907178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbd190 (9): Bad file descriptor 00:08:39.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:39.791 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.791 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:39.791 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 134039 00:08:39.791 00:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 134039 00:08:40.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (134039) - No such process 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 134039 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 134039 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
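Once the subsystem is deleted out from under the running perf job (the flood of "completed with error" entries above), delete_subsystem.sh simply polls the perf PID with kill -0 and sleep 0.5 until the process exits or a retry bound is hit, which is what produces the "No such process" line. A hedged sketch of that pattern, assuming $perf_pid was captured when spdk_nvme_perf was backgrounded:

#!/usr/bin/env bash
# Hedged sketch of the delete-under-load check above: remove the subsystem
# while spdk_nvme_perf is still connected, then poll the perf process until
# its queued I/O fails out and it exits. The retry bound mirrors the script's
# (( delay++ > 30 )) loop.

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "spdk_nvme_perf did not exit after subsystem deletion" >&2
        exit 1
    fi
    sleep 0.5
done
echo "spdk_nvme_perf ($perf_pid) exited after the subsystem was deleted"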
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 134039 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.357 [2024-12-07 00:35:56.431373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.357 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=134567 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.358 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.358 [2024-12-07 00:35:56.503789] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:40.923 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.923 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:40.923 00:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.489 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.489 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:41.489 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.087 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.087 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:42.087 00:35:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.344 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.344 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:42.344 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.908 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.908 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:42.908 00:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.473 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.473 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:43.473 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.732 Initializing NVMe Controllers 00:08:43.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.732 Controller IO queue size 128, less than required. 00:08:43.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.732 Initialization complete. Launching workers. 
00:08:43.732 ======================================================== 00:08:43.732 Latency(us) 00:08:43.732 Device Information : IOPS MiB/s Average min max 00:08:43.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004053.44 1000176.56 1011249.22 00:08:43.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004919.70 1000183.35 1041670.32 00:08:43.732 ======================================================== 00:08:43.732 Total : 256.00 0.12 1004486.57 1000176.56 1041670.32 00:08:43.732 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134567 00:08:43.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (134567) - No such process 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 134567 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.990 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:43.991 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.991 00:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.991 rmmod nvme_tcp 00:08:43.991 rmmod nvme_fabrics 00:08:43.991 rmmod nvme_keyring 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 134017 ']' 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 134017 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 134017 ']' 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 134017 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134017 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134017' 00:08:43.991 killing process with pid 134017 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 134017 00:08:43.991 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 134017 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.251 00:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.159 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.159 00:08:46.159 real 0m12.381s 00:08:46.159 user 0m27.969s 00:08:46.159 sys 0m3.012s 00:08:46.418 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.418 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.418 ************************************ 00:08:46.418 END TEST nvmf_delete_subsystem 00:08:46.418 ************************************ 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.419 ************************************ 00:08:46.419 START TEST nvmf_host_management 00:08:46.419 ************************************ 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:46.419 * Looking for test storage... 
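Before the host-management test above gets going, the delete-subsystem run is torn down by nvmftestfini as traced in the preceding block: the host-side NVMe modules are unloaded (rmmod nvme_tcp / nvme_fabrics / nvme_keyring), the nvmf_tgt application behind pid 134017 is killed, only the iptables rules the test added are dropped, the SPDK network namespace is removed, and the initiator address is flushed. A hand-condensed sketch of that sequence (interface and tag names are taken from the trace; the namespace deletion inside _remove_spdk_ns is an assumption, not copied from the scripts):

    modprobe -r nvme-tcp nvme-fabrics                      # rmmods nvme_tcp / nvme_fabrics / nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (pid 134017 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
    ip netns del cvl_0_0_ns_spdk 2>/dev/null               # assumption: _remove_spdk_ns deletes the test netns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address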
00:08:46.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.419 --rc genhtml_branch_coverage=1 00:08:46.419 --rc genhtml_function_coverage=1 00:08:46.419 --rc genhtml_legend=1 00:08:46.419 --rc geninfo_all_blocks=1 00:08:46.419 --rc geninfo_unexecuted_blocks=1 00:08:46.419 00:08:46.419 ' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.419 --rc genhtml_branch_coverage=1 00:08:46.419 --rc genhtml_function_coverage=1 00:08:46.419 --rc genhtml_legend=1 00:08:46.419 --rc geninfo_all_blocks=1 00:08:46.419 --rc geninfo_unexecuted_blocks=1 00:08:46.419 00:08:46.419 ' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.419 --rc genhtml_branch_coverage=1 00:08:46.419 --rc genhtml_function_coverage=1 00:08:46.419 --rc genhtml_legend=1 00:08:46.419 --rc geninfo_all_blocks=1 00:08:46.419 --rc geninfo_unexecuted_blocks=1 00:08:46.419 00:08:46.419 ' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.419 --rc genhtml_branch_coverage=1 00:08:46.419 --rc genhtml_function_coverage=1 00:08:46.419 --rc genhtml_legend=1 00:08:46.419 --rc geninfo_all_blocks=1 00:08:46.419 --rc geninfo_unexecuted_blocks=1 00:08:46.419 00:08:46.419 ' 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.419 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:46.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.420 00:36:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.965 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.966 00:36:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:08:48.966 00:08:48.966 --- 10.0.0.2 ping statistics --- 00:08:48.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.966 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:48.966 00:08:48.966 --- 10.0.0.1 ping statistics --- 00:08:48.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.966 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.966 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=137037 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 137037 00:08:48.967 00:36:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 137037 ']' 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.967 00:36:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.967 [2024-12-07 00:36:04.931868] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:08:48.967 [2024-12-07 00:36:04.931961] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.967 [2024-12-07 00:36:05.009203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.967 [2024-12-07 00:36:05.060606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.967 [2024-12-07 00:36:05.060662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.967 [2024-12-07 00:36:05.060691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.967 [2024-12-07 00:36:05.060703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.967 [2024-12-07 00:36:05.060712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
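Before the reactors report in below, the preceding block has already rebuilt the test topology and launched the target inside it: the first ice port (cvl_0_0) is moved into a fresh namespace and addressed 10.0.0.2/24, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule tagged SPDK_NVMF opens TCP/4420, connectivity is ping-verified in both directions, and nvmf_tgt is started inside the namespace with core mask 0x1E. A condensed sketch of that bring-up (paths shortened; the iptables comment string is abbreviated from the one in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                         # initiator -> target (0.275 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator (0.113 ms above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!                                                 # 137037 in this run

Core mask 0x1E selects cores 1-4, which is why exactly four reactors come up on cores 1, 2, 3 and 4 in the lines that follow.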
00:08:48.967 [2024-12-07 00:36:05.062379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.967 [2024-12-07 00:36:05.062437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.967 [2024-12-07 00:36:05.062504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.967 [2024-12-07 00:36:05.062506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.224 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.225 [2024-12-07 00:36:05.213149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.225 Malloc0 00:08:49.225 [2024-12-07 00:36:05.284858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=137135 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 137135 /var/tmp/bdevperf.sock 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 137135 ']' 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.225 { 00:08:49.225 "params": { 00:08:49.225 "name": "Nvme$subsystem", 00:08:49.225 "trtype": "$TEST_TRANSPORT", 00:08:49.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.225 "adrfam": "ipv4", 00:08:49.225 "trsvcid": "$NVMF_PORT", 00:08:49.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.225 "hdgst": ${hdgst:-false}, 00:08:49.225 "ddgst": ${ddgst:-false} 00:08:49.225 }, 00:08:49.225 "method": "bdev_nvme_attach_controller" 00:08:49.225 } 00:08:49.225 EOF 00:08:49.225 )") 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:49.225 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.225 "params": { 00:08:49.225 "name": "Nvme0", 00:08:49.225 "trtype": "tcp", 00:08:49.225 "traddr": "10.0.0.2", 00:08:49.225 "adrfam": "ipv4", 00:08:49.225 "trsvcid": "4420", 00:08:49.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:49.225 "hdgst": false, 00:08:49.225 "ddgst": false 00:08:49.225 }, 00:08:49.225 "method": "bdev_nvme_attach_controller" 00:08:49.225 }' 00:08:49.225 [2024-12-07 00:36:05.365315] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
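The JSON fragment printed just above is what gen_nvmf_target_json renders for subsystem 0: a single bdev_nvme_attach_controller entry that points bdevperf at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0, with header and data digests off. bdevperf never reads a file on disk; the harness hands it the config through process substitution, which is why the command line shows --json /dev/fd/63. A hedged sketch of the invocation (the flags are copied from the trace; how gen_nvmf_target_json wraps the fragment into a full bdev-subsystem config is an assumption):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \    # appears as --json /dev/fd/63 in the trace
        -q 64 \                               # 64 outstanding I/Os
        -o 65536 \                            # 64 KiB I/O size
        -w verify \                           # verify workload (reads back and checks written data)
        -t 10                                 # run for 10 seconds

The attached controller surfaces as bdev Nvme0n1, which is the name the iostat polling below queries.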
00:08:49.225 [2024-12-07 00:36:05.365412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137135 ] 00:08:49.483 [2024-12-07 00:36:05.438644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.483 [2024-12-07 00:36:05.485861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.740 Running I/O for 10 seconds... 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:49.740 00:36:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:49.999 
00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.999 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.999 [2024-12-07 00:36:06.107747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.999 [2024-12-07 00:36:06.107814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:49.999 [2024-12-07 00:36:06.107842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.999 [2024-12-07 00:36:06.107859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:49.999 [2024-12-07 00:36:06.107875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.999 [2024-12-07 00:36:06.107891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:49.999 [2024-12-07 00:36:06.107907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.107922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.107938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.107953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
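The burst of WRITE ... ABORTED - SQ DELETION completions that starts here is the expected outcome of the step traced just above: waitforio polls bdevperf's iostat until Nvme0n1 has served at least 100 reads (67 on the first poll, 579 a quarter-second later), and only then does the harness remove the host from cnode0 while the verify workload is still queuing I/O, so every command left on the I/O submission queue is completed as aborted when the qpairs are torn down. A condensed sketch of that sequence (rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py; argument strings are copied from the trace):

    i=10
    while (( i != 0 )); do                                     # waitforio: up to 10 polls
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break                            # enough traffic observed
        sleep 0.25
        (( i-- ))
    done
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # -> in-flight WRITEs complete with "ABORTED - SQ DELETION", as logged here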
00:08:50.000 [2024-12-07 00:36:06.107980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 
[2024-12-07 00:36:06.108309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 
00:36:06.108621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 
00:36:06.108927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.108972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.108987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.109010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.109027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.109047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.109069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.109084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.109099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.109114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.109129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.000 [2024-12-07 00:36:06.109144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.000 [2024-12-07 00:36:06.109159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 
00:36:06.109254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 
00:36:06.109573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.109813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.001 [2024-12-07 00:36:06.109827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.001 [2024-12-07 00:36:06.111068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:50.001 task offset: 82304 on job bdev=Nvme0n1 fails 00:08:50.001 00:08:50.001 Latency(us) 00:08:50.001 [2024-12-06T23:36:06.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.001 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:50.001 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:50.001 Verification LBA range: start 0x0 length 0x400 00:08:50.001 Nvme0n1 : 0.40 1603.27 100.20 160.33 0.00 35235.80 2633.58 33981.63 00:08:50.001 [2024-12-06T23:36:06.152Z] =================================================================================================================== 00:08:50.001 [2024-12-06T23:36:06.152Z] Total : 1603.27 100.20 160.33 0.00 35235.80 2633.58 33981.63 00:08:50.001 [2024-12-07 00:36:06.112976] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.001 [2024-12-07 00:36:06.113016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de8980 (9): Bad file descriptor 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.001 00:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:50.001 [2024-12-07 00:36:06.120063] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 137135 00:08:51.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (137135) - No such process 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:51.375 { 00:08:51.375 "params": { 00:08:51.375 "name": "Nvme$subsystem", 00:08:51.375 "trtype": "$TEST_TRANSPORT", 00:08:51.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.375 "adrfam": "ipv4", 00:08:51.375 "trsvcid": "$NVMF_PORT", 00:08:51.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.375 "hdgst": 
${hdgst:-false}, 00:08:51.375 "ddgst": ${ddgst:-false} 00:08:51.375 }, 00:08:51.375 "method": "bdev_nvme_attach_controller" 00:08:51.375 } 00:08:51.375 EOF 00:08:51.375 )") 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:51.375 00:36:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:51.375 "params": { 00:08:51.375 "name": "Nvme0", 00:08:51.375 "trtype": "tcp", 00:08:51.375 "traddr": "10.0.0.2", 00:08:51.375 "adrfam": "ipv4", 00:08:51.375 "trsvcid": "4420", 00:08:51.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:51.375 "hdgst": false, 00:08:51.375 "ddgst": false 00:08:51.375 }, 00:08:51.375 "method": "bdev_nvme_attach_controller" 00:08:51.375 }' 00:08:51.375 [2024-12-07 00:36:07.167599] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:08:51.375 [2024-12-07 00:36:07.167676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137361 ] 00:08:51.375 [2024-12-07 00:36:07.237576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.375 [2024-12-07 00:36:07.285577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.375 Running I/O for 1 seconds... 00:08:52.754 1664.00 IOPS, 104.00 MiB/s 00:08:52.754 Latency(us) 00:08:52.754 [2024-12-06T23:36:08.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.754 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:52.754 Verification LBA range: start 0x0 length 0x400 00:08:52.754 Nvme0n1 : 1.03 1684.77 105.30 0.00 0.00 37370.79 6893.42 32816.55 00:08:52.754 [2024-12-06T23:36:08.905Z] =================================================================================================================== 00:08:52.754 [2024-12-06T23:36:08.905Z] Total : 1684.77 105.30 0.00 0.00 37370.79 6893.42 32816.55 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:52.754 00:36:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.754 rmmod nvme_tcp 00:08:52.754 rmmod nvme_fabrics 00:08:52.754 rmmod nvme_keyring 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 137037 ']' 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 137037 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 137037 ']' 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 137037 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137037 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137037' 00:08:52.754 killing process with pid 137037 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 137037 00:08:52.754 00:36:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 137037 00:08:53.012 [2024-12-07 00:36:09.022044] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.012 00:36:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.012 00:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:55.553 00:08:55.553 real 0m8.736s 00:08:55.553 user 0m19.067s 00:08:55.553 sys 0m2.731s 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 ************************************ 00:08:55.553 END TEST nvmf_host_management 00:08:55.553 ************************************ 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 ************************************ 00:08:55.553 START TEST nvmf_lvol 00:08:55.553 ************************************ 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:55.553 * Looking for test storage... 
00:08:55.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.553 --rc genhtml_branch_coverage=1 00:08:55.553 --rc genhtml_function_coverage=1 00:08:55.553 --rc genhtml_legend=1 00:08:55.553 --rc geninfo_all_blocks=1 00:08:55.553 --rc geninfo_unexecuted_blocks=1 00:08:55.553 00:08:55.553 ' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.553 --rc genhtml_branch_coverage=1 00:08:55.553 --rc genhtml_function_coverage=1 00:08:55.553 --rc genhtml_legend=1 00:08:55.553 --rc geninfo_all_blocks=1 00:08:55.553 --rc geninfo_unexecuted_blocks=1 00:08:55.553 00:08:55.553 ' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.553 --rc genhtml_branch_coverage=1 00:08:55.553 --rc genhtml_function_coverage=1 00:08:55.553 --rc genhtml_legend=1 00:08:55.553 --rc geninfo_all_blocks=1 00:08:55.553 --rc geninfo_unexecuted_blocks=1 00:08:55.553 00:08:55.553 ' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.553 --rc genhtml_branch_coverage=1 00:08:55.553 --rc genhtml_function_coverage=1 00:08:55.553 --rc genhtml_legend=1 00:08:55.553 --rc geninfo_all_blocks=1 00:08:55.553 --rc geninfo_unexecuted_blocks=1 00:08:55.553 00:08:55.553 ' 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
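The cmp_versions trace above (lt 1.15 2 → cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' and ':' and compares the fields left to right over the longer of the two field lists. A minimal bash sketch of that idea follows; it is a simplification (numeric fields only, missing fields treated as 0 here), not the exact scripts/common.sh helper.

  # lt A B: succeed when version A sorts strictly before version B.
  # Fields are split on '.', '-' and ':' exactly as in the trace (IFS=.-:).
  lt() {
      local IFS=.-: i max
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields compare as 0
          (( a < b )) && return 0                 # first differing field decides
          (( a > b )) && return 1
      done
      return 1                                    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2"        # mirrors the 'lt 1.15 2' call above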
00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.553 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.554 00:36:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:57.460 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:57.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:57.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.461 00:36:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:57.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:57.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:57.461 00:08:57.461 --- 10.0.0.2 ping statistics --- 00:08:57.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.461 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:57.461 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:08:57.719 00:08:57.719 --- 10.0.0.1 ping statistics --- 00:08:57.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.719 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=140079 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 140079 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 140079 ']' 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.719 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.719 [2024-12-07 00:36:13.690586] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
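The ip/iptables sequence in the trace above (nvmf_tcp_init in test/nvmf/common.sh) isolates the target-side port in its own network namespace so the NVMe/TCP target and the initiator can exchange traffic over the two physical ports of a single host. A condensed sketch of that plumbing, reusing the interface names and addresses from this run; it is a reading aid, not a drop-in replacement for the helper.

  # Target NIC (cvl_0_0) goes into the namespace; initiator NIC (cvl_0_1) stays
  # in the default namespace. Addresses are the ones used by this run.
  TARGET_NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"

  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up

  # Open the NVMe/TCP port; the rule is tagged so teardown can strip it with
  # iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen later in the log.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                              # default ns -> namespaced target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # namespaced target -> initiator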
00:08:57.719 [2024-12-07 00:36:13.690677] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.719 [2024-12-07 00:36:13.763203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.719 [2024-12-07 00:36:13.806189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.719 [2024-12-07 00:36:13.806246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.719 [2024-12-07 00:36:13.806280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.719 [2024-12-07 00:36:13.806292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.719 [2024-12-07 00:36:13.806301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.719 [2024-12-07 00:36:13.807877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.719 [2024-12-07 00:36:13.808019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.719 [2024-12-07 00:36:13.808020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.977 00:36:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.234 [2024-12-07 00:36:14.197576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.234 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.491 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:58.491 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.804 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:58.804 00:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:59.061 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:59.318 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ce9fc849-fdc3-4a5e-8155-a69deff6725c 00:08:59.318 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce9fc849-fdc3-4a5e-8155-a69deff6725c lvol 20 00:08:59.574 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=80c862cd-26fe-4245-8f86-5464bf767530 00:08:59.574 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.831 00:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80c862cd-26fe-4245-8f86-5464bf767530 00:09:00.088 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.346 [2024-12-07 00:36:16.445390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.346 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.603 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=140509 00:09:00.603 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:00.603 00:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:01.978 00:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 80c862cd-26fe-4245-8f86-5464bf767530 MY_SNAPSHOT 00:09:01.978 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b291a052-67c9-43ca-8ed8-2f54a24976c1 00:09:01.978 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 80c862cd-26fe-4245-8f86-5464bf767530 30 00:09:02.543 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b291a052-67c9-43ca-8ed8-2f54a24976c1 MY_CLONE 00:09:02.801 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=219c7df0-e8e0-4d36-99df-a7b8205ace72 00:09:02.801 00:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 219c7df0-e8e0-4d36-99df-a7b8205ace72 00:09:03.367 00:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 140509 00:09:11.480 Initializing NVMe Controllers 00:09:11.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:11.480 Controller IO queue size 128, less than required. 00:09:11.480 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:11.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:11.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:11.480 Initialization complete. Launching workers. 00:09:11.480 ======================================================== 00:09:11.480 Latency(us) 00:09:11.480 Device Information : IOPS MiB/s Average min max 00:09:11.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10499.59 41.01 12196.86 500.35 77604.20 00:09:11.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10435.29 40.76 12270.24 2252.31 73967.63 00:09:11.480 ======================================================== 00:09:11.480 Total : 20934.89 81.78 12233.43 500.35 77604.20 00:09:11.480 00:09:11.480 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.480 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80c862cd-26fe-4245-8f86-5464bf767530 00:09:11.739 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce9fc849-fdc3-4a5e-8155-a69deff6725c 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.998 00:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.998 rmmod nvme_tcp 00:09:11.998 rmmod nvme_fabrics 00:09:11.998 rmmod nvme_keyring 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 140079 ']' 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 140079 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 140079 ']' 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 140079 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140079 00:09:11.998 00:36:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140079' 00:09:11.998 killing process with pid 140079 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 140079 00:09:11.998 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 140079 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.259 00:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.803 00:09:14.803 real 0m19.223s 00:09:14.803 user 1m5.676s 00:09:14.803 sys 0m5.562s 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.803 ************************************ 00:09:14.803 END TEST nvmf_lvol 00:09:14.803 ************************************ 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.803 ************************************ 00:09:14.803 START TEST nvmf_lvs_grow 00:09:14.803 ************************************ 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:14.803 * Looking for test storage... 
00:09:14.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.803 --rc genhtml_branch_coverage=1 00:09:14.803 --rc genhtml_function_coverage=1 00:09:14.803 --rc genhtml_legend=1 00:09:14.803 --rc geninfo_all_blocks=1 00:09:14.803 --rc geninfo_unexecuted_blocks=1 00:09:14.803 00:09:14.803 ' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.803 --rc genhtml_branch_coverage=1 00:09:14.803 --rc genhtml_function_coverage=1 00:09:14.803 --rc genhtml_legend=1 00:09:14.803 --rc geninfo_all_blocks=1 00:09:14.803 --rc geninfo_unexecuted_blocks=1 00:09:14.803 00:09:14.803 ' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.803 --rc genhtml_branch_coverage=1 00:09:14.803 --rc genhtml_function_coverage=1 00:09:14.803 --rc genhtml_legend=1 00:09:14.803 --rc geninfo_all_blocks=1 00:09:14.803 --rc geninfo_unexecuted_blocks=1 00:09:14.803 00:09:14.803 ' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.803 --rc genhtml_branch_coverage=1 00:09:14.803 --rc genhtml_function_coverage=1 00:09:14.803 --rc genhtml_legend=1 00:09:14.803 --rc geninfo_all_blocks=1 00:09:14.803 --rc geninfo_unexecuted_blocks=1 00:09:14.803 00:09:14.803 ' 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:14.803 00:36:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.803 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.804 00:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:16.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:16.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.713 00:36:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:16.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:16.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.713 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:09:16.972 00:09:16.972 --- 10.0.0.2 ping statistics --- 00:09:16.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.972 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:16.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:09:16.972 00:09:16.972 --- 10.0.0.1 ping statistics --- 00:09:16.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.972 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=143809 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 143809 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 143809 ']' 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.972 00:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.972 [2024-12-07 00:36:33.005802] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
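The bring-up traced above is the stock nvmf/common.sh network plumbing for phy runs: one NIC port (cvl_0_0) is moved into a private network namespace and becomes the target side, while the other port (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic really crosses the wire. A condensed sketch of what the helper does, reusing the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # port-4420 rule added by common.sh
    ping -c 1 10.0.0.2                                                 # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside the namespace, exactly as the log shows next (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1).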
00:09:16.973 [2024-12-07 00:36:33.005907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.973 [2024-12-07 00:36:33.079714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.973 [2024-12-07 00:36:33.121875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.973 [2024-12-07 00:36:33.121947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.973 [2024-12-07 00:36:33.121987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.973 [2024-12-07 00:36:33.122007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.973 [2024-12-07 00:36:33.122018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.231 [2024-12-07 00:36:33.122625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.231 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:17.490 [2024-12-07 00:36:33.508693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.490 ************************************ 00:09:17.490 START TEST lvs_grow_clean 00:09:17.490 ************************************ 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.490 00:36:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.490 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.748 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:17.748 00:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.006 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:18.006 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:18.006 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.263 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.263 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.263 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 lvol 150 00:09:18.520 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=27c6ec6a-161e-4274-bdb2-6e31c40a3d66 00:09:18.520 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.520 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.777 [2024-12-07 00:36:34.919436] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.777 [2024-12-07 00:36:34.919529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.777 true 00:09:19.043 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:19.043 00:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.300 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.300 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.558 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27c6ec6a-161e-4274-bdb2-6e31c40a3d66 00:09:19.817 00:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.075 [2024-12-07 00:36:36.006678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.075 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.333 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=144253 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 144253 /var/tmp/bdevperf.sock 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 144253 ']' 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.334 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:20.334 [2024-12-07 00:36:36.332758] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
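From this point the lvs_grow tests drive the initiator side with bdevperf rather than the kernel NVMe host: bdevperf is launched idle (-z) on its own RPC socket, the exported namespace is attached to it as an NVMe bdev over TCP, and the preconfigured job is started through bdevperf.py. A minimal sketch of that control pattern, with the same socket path and job parameters as this run (the SPDK shorthand variable is mine):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle (-z) and control it over /var/tmp/bdevperf.sock
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the target's namespace as bdev Nvme0n1 inside the running bdevperf...
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # ...then kick off the configured randwrite workload.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests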
00:09:20.334 [2024-12-07 00:36:36.332842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144253 ] 00:09:20.334 [2024-12-07 00:36:36.399002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.334 [2024-12-07 00:36:36.443222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.593 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.593 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:20.593 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.852 Nvme0n1 00:09:20.852 00:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.110 [ 00:09:21.110 { 00:09:21.110 "name": "Nvme0n1", 00:09:21.110 "aliases": [ 00:09:21.110 "27c6ec6a-161e-4274-bdb2-6e31c40a3d66" 00:09:21.110 ], 00:09:21.110 "product_name": "NVMe disk", 00:09:21.110 "block_size": 4096, 00:09:21.111 "num_blocks": 38912, 00:09:21.111 "uuid": "27c6ec6a-161e-4274-bdb2-6e31c40a3d66", 00:09:21.111 "numa_id": 0, 00:09:21.111 "assigned_rate_limits": { 00:09:21.111 "rw_ios_per_sec": 0, 00:09:21.111 "rw_mbytes_per_sec": 0, 00:09:21.111 "r_mbytes_per_sec": 0, 00:09:21.111 "w_mbytes_per_sec": 0 00:09:21.111 }, 00:09:21.111 "claimed": false, 00:09:21.111 "zoned": false, 00:09:21.111 "supported_io_types": { 00:09:21.111 "read": true, 00:09:21.111 "write": true, 00:09:21.111 "unmap": true, 00:09:21.111 "flush": true, 00:09:21.111 "reset": true, 00:09:21.111 "nvme_admin": true, 00:09:21.111 "nvme_io": true, 00:09:21.111 "nvme_io_md": false, 00:09:21.111 "write_zeroes": true, 00:09:21.111 "zcopy": false, 00:09:21.111 "get_zone_info": false, 00:09:21.111 "zone_management": false, 00:09:21.111 "zone_append": false, 00:09:21.111 "compare": true, 00:09:21.111 "compare_and_write": true, 00:09:21.111 "abort": true, 00:09:21.111 "seek_hole": false, 00:09:21.111 "seek_data": false, 00:09:21.111 "copy": true, 00:09:21.111 "nvme_iov_md": false 00:09:21.111 }, 00:09:21.111 "memory_domains": [ 00:09:21.111 { 00:09:21.111 "dma_device_id": "system", 00:09:21.111 "dma_device_type": 1 00:09:21.111 } 00:09:21.111 ], 00:09:21.111 "driver_specific": { 00:09:21.111 "nvme": [ 00:09:21.111 { 00:09:21.111 "trid": { 00:09:21.111 "trtype": "TCP", 00:09:21.111 "adrfam": "IPv4", 00:09:21.111 "traddr": "10.0.0.2", 00:09:21.111 "trsvcid": "4420", 00:09:21.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.111 }, 00:09:21.111 "ctrlr_data": { 00:09:21.111 "cntlid": 1, 00:09:21.111 "vendor_id": "0x8086", 00:09:21.111 "model_number": "SPDK bdev Controller", 00:09:21.111 "serial_number": "SPDK0", 00:09:21.111 "firmware_revision": "25.01", 00:09:21.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.111 "oacs": { 00:09:21.111 "security": 0, 00:09:21.111 "format": 0, 00:09:21.111 "firmware": 0, 00:09:21.111 "ns_manage": 0 00:09:21.111 }, 00:09:21.111 "multi_ctrlr": true, 00:09:21.111 
"ana_reporting": false 00:09:21.111 }, 00:09:21.111 "vs": { 00:09:21.111 "nvme_version": "1.3" 00:09:21.111 }, 00:09:21.111 "ns_data": { 00:09:21.111 "id": 1, 00:09:21.111 "can_share": true 00:09:21.111 } 00:09:21.111 } 00:09:21.111 ], 00:09:21.111 "mp_policy": "active_passive" 00:09:21.111 } 00:09:21.111 } 00:09:21.111 ] 00:09:21.111 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=144388 00:09:21.111 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.111 00:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.369 Running I/O for 10 seconds... 00:09:22.314 Latency(us) 00:09:22.314 [2024-12-06T23:36:38.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.314 Nvme0n1 : 1.00 16003.00 62.51 0.00 0.00 0.00 0.00 0.00 00:09:22.314 [2024-12-06T23:36:38.465Z] =================================================================================================================== 00:09:22.314 [2024-12-06T23:36:38.465Z] Total : 16003.00 62.51 0.00 0.00 0.00 0.00 0.00 00:09:22.314 00:09:23.248 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:23.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.248 Nvme0n1 : 2.00 16225.00 63.38 0.00 0.00 0.00 0.00 0.00 00:09:23.248 [2024-12-06T23:36:39.399Z] =================================================================================================================== 00:09:23.248 [2024-12-06T23:36:39.399Z] Total : 16225.00 63.38 0.00 0.00 0.00 0.00 0.00 00:09:23.248 00:09:23.507 true 00:09:23.507 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:23.507 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.766 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.766 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.766 00:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 144388 00:09:24.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.333 Nvme0n1 : 3.00 16129.33 63.01 0.00 0.00 0.00 0.00 0.00 00:09:24.333 [2024-12-06T23:36:40.484Z] =================================================================================================================== 00:09:24.333 [2024-12-06T23:36:40.484Z] Total : 16129.33 63.01 0.00 0.00 0.00 0.00 0.00 00:09:24.333 00:09:25.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.271 Nvme0n1 : 4.00 16161.00 63.13 0.00 0.00 0.00 0.00 0.00 00:09:25.271 [2024-12-06T23:36:41.422Z] 
=================================================================================================================== 00:09:25.271 [2024-12-06T23:36:41.422Z] Total : 16161.00 63.13 0.00 0.00 0.00 0.00 0.00 00:09:25.271 00:09:26.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.207 Nvme0n1 : 5.00 16269.20 63.55 0.00 0.00 0.00 0.00 0.00 00:09:26.207 [2024-12-06T23:36:42.358Z] =================================================================================================================== 00:09:26.207 [2024-12-06T23:36:42.358Z] Total : 16269.20 63.55 0.00 0.00 0.00 0.00 0.00 00:09:26.207 00:09:27.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.582 Nvme0n1 : 6.00 16341.83 63.84 0.00 0.00 0.00 0.00 0.00 00:09:27.582 [2024-12-06T23:36:43.733Z] =================================================================================================================== 00:09:27.582 [2024-12-06T23:36:43.734Z] Total : 16341.83 63.84 0.00 0.00 0.00 0.00 0.00 00:09:27.583 00:09:28.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.519 Nvme0n1 : 7.00 16402.14 64.07 0.00 0.00 0.00 0.00 0.00 00:09:28.519 [2024-12-06T23:36:44.670Z] =================================================================================================================== 00:09:28.519 [2024-12-06T23:36:44.670Z] Total : 16402.14 64.07 0.00 0.00 0.00 0.00 0.00 00:09:28.519 00:09:29.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.455 Nvme0n1 : 8.00 16463.25 64.31 0.00 0.00 0.00 0.00 0.00 00:09:29.455 [2024-12-06T23:36:45.606Z] =================================================================================================================== 00:09:29.455 [2024-12-06T23:36:45.606Z] Total : 16463.25 64.31 0.00 0.00 0.00 0.00 0.00 00:09:29.455 00:09:30.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.389 Nvme0n1 : 9.00 16503.89 64.47 0.00 0.00 0.00 0.00 0.00 00:09:30.389 [2024-12-06T23:36:46.540Z] =================================================================================================================== 00:09:30.389 [2024-12-06T23:36:46.540Z] Total : 16503.89 64.47 0.00 0.00 0.00 0.00 0.00 00:09:30.389 00:09:31.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.324 Nvme0n1 : 10.00 16539.80 64.61 0.00 0.00 0.00 0.00 0.00 00:09:31.324 [2024-12-06T23:36:47.475Z] =================================================================================================================== 00:09:31.324 [2024-12-06T23:36:47.475Z] Total : 16539.80 64.61 0.00 0.00 0.00 0.00 0.00 00:09:31.324 00:09:31.324 00:09:31.324 Latency(us) 00:09:31.324 [2024-12-06T23:36:47.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.324 Nvme0n1 : 10.01 16546.98 64.64 0.00 0.00 7730.44 4029.25 18058.81 00:09:31.324 [2024-12-06T23:36:47.475Z] =================================================================================================================== 00:09:31.324 [2024-12-06T23:36:47.475Z] Total : 16546.98 64.64 0.00 0.00 7730.44 4029.25 18058.81 00:09:31.324 { 00:09:31.324 "results": [ 00:09:31.324 { 00:09:31.324 "job": "Nvme0n1", 00:09:31.324 "core_mask": "0x2", 00:09:31.324 "workload": "randwrite", 00:09:31.324 "status": "finished", 00:09:31.324 "queue_depth": 128, 00:09:31.324 "io_size": 4096, 00:09:31.324 
"runtime": 10.009017, 00:09:31.324 "iops": 16546.979588505044, 00:09:31.324 "mibps": 64.63663901759783, 00:09:31.324 "io_failed": 0, 00:09:31.324 "io_timeout": 0, 00:09:31.324 "avg_latency_us": 7730.440425814448, 00:09:31.324 "min_latency_us": 4029.2503703703705, 00:09:31.324 "max_latency_us": 18058.80888888889 00:09:31.324 } 00:09:31.324 ], 00:09:31.324 "core_count": 1 00:09:31.324 } 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 144253 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 144253 ']' 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 144253 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144253 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144253' 00:09:31.324 killing process with pid 144253 00:09:31.324 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 144253 00:09:31.324 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.324 00:09:31.324 Latency(us) 00:09:31.324 [2024-12-06T23:36:47.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.324 [2024-12-06T23:36:47.475Z] =================================================================================================================== 00:09:31.324 [2024-12-06T23:36:47.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.325 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 144253 00:09:31.583 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.842 00:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.100 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:32.100 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.359 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.359 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:32.359 00:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.617 [2024-12-07 00:36:48.646762] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:32.617 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:32.876 request: 00:09:32.876 { 00:09:32.876 "uuid": "f99c0f16-dc2e-4886-b8dc-5d2c543e54d7", 00:09:32.876 "method": "bdev_lvol_get_lvstores", 00:09:32.876 "req_id": 1 00:09:32.876 } 00:09:32.876 Got JSON-RPC error response 00:09:32.876 response: 00:09:32.876 { 00:09:32.876 "code": -19, 00:09:32.876 "message": "No such device" 00:09:32.876 } 00:09:32.876 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:32.876 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.876 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.876 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.876 00:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.135 aio_bdev 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27c6ec6a-161e-4274-bdb2-6e31c40a3d66 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=27c6ec6a-161e-4274-bdb2-6e31c40a3d66 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.135 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.394 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 27c6ec6a-161e-4274-bdb2-6e31c40a3d66 -t 2000 00:09:33.653 [ 00:09:33.653 { 00:09:33.653 "name": "27c6ec6a-161e-4274-bdb2-6e31c40a3d66", 00:09:33.653 "aliases": [ 00:09:33.653 "lvs/lvol" 00:09:33.653 ], 00:09:33.653 "product_name": "Logical Volume", 00:09:33.653 "block_size": 4096, 00:09:33.653 "num_blocks": 38912, 00:09:33.653 "uuid": "27c6ec6a-161e-4274-bdb2-6e31c40a3d66", 00:09:33.653 "assigned_rate_limits": { 00:09:33.653 "rw_ios_per_sec": 0, 00:09:33.653 "rw_mbytes_per_sec": 0, 00:09:33.653 "r_mbytes_per_sec": 0, 00:09:33.653 "w_mbytes_per_sec": 0 00:09:33.653 }, 00:09:33.653 "claimed": false, 00:09:33.653 "zoned": false, 00:09:33.653 "supported_io_types": { 00:09:33.653 "read": true, 00:09:33.653 "write": true, 00:09:33.653 "unmap": true, 00:09:33.653 "flush": false, 00:09:33.653 "reset": true, 00:09:33.653 "nvme_admin": false, 00:09:33.653 "nvme_io": false, 00:09:33.653 "nvme_io_md": false, 00:09:33.653 "write_zeroes": true, 00:09:33.653 "zcopy": false, 00:09:33.653 "get_zone_info": false, 00:09:33.653 "zone_management": false, 00:09:33.653 "zone_append": false, 00:09:33.653 "compare": false, 00:09:33.653 "compare_and_write": false, 00:09:33.653 "abort": false, 00:09:33.653 "seek_hole": true, 00:09:33.653 "seek_data": true, 00:09:33.653 "copy": false, 00:09:33.653 "nvme_iov_md": false 00:09:33.653 }, 00:09:33.653 "driver_specific": { 00:09:33.653 "lvol": { 00:09:33.653 "lvol_store_uuid": "f99c0f16-dc2e-4886-b8dc-5d2c543e54d7", 00:09:33.653 "base_bdev": "aio_bdev", 00:09:33.653 "thin_provision": false, 00:09:33.653 "num_allocated_clusters": 38, 00:09:33.653 "snapshot": false, 00:09:33.653 "clone": false, 00:09:33.653 "esnap_clone": false 00:09:33.653 } 00:09:33.653 } 00:09:33.653 } 00:09:33.653 ] 00:09:33.912 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:33.912 00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:33.912 
00:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:34.170 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:34.170 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:34.170 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:34.429 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:34.429 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27c6ec6a-161e-4274-bdb2-6e31c40a3d66 00:09:34.687 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f99c0f16-dc2e-4886-b8dc-5d2c543e54d7 00:09:34.945 00:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.203 00:09:35.203 real 0m17.675s 00:09:35.203 user 0m17.203s 00:09:35.203 sys 0m1.839s 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:35.203 ************************************ 00:09:35.203 END TEST lvs_grow_clean 00:09:35.203 ************************************ 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.203 ************************************ 00:09:35.203 START TEST lvs_grow_dirty 00:09:35.203 ************************************ 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:35.203 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:35.204 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:35.204 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:35.204 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.204 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.204 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.461 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:35.461 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:35.719 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:35.719 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:35.719 00:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:35.978 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:35.978 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:35.978 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 lvol 150 00:09:36.546 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:36.546 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.546 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:36.546 [2024-12-07 00:36:52.650439] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:36.546 [2024-12-07 00:36:52.650540] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:36.546 true 00:09:36.546 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:36.546 00:36:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:36.805 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:36.805 00:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:37.063 00:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:37.629 00:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:37.629 [2024-12-07 00:36:53.721585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.629 00:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=146397 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 146397 /var/tmp/bdevperf.sock 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 146397 ']' 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:37.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.887 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.145 [2024-12-07 00:36:54.057571] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:09:38.145 [2024-12-07 00:36:54.057653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146397 ] 00:09:38.145 [2024-12-07 00:36:54.124015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.145 [2024-12-07 00:36:54.168805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.145 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.145 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:38.145 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:38.710 Nvme0n1 00:09:38.710 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.710 [ 00:09:38.710 { 00:09:38.711 "name": "Nvme0n1", 00:09:38.711 "aliases": [ 00:09:38.711 "f776e1c1-3181-43a6-b11a-0659f0daed7a" 00:09:38.711 ], 00:09:38.711 "product_name": "NVMe disk", 00:09:38.711 "block_size": 4096, 00:09:38.711 "num_blocks": 38912, 00:09:38.711 "uuid": "f776e1c1-3181-43a6-b11a-0659f0daed7a", 00:09:38.711 "numa_id": 0, 00:09:38.711 "assigned_rate_limits": { 00:09:38.711 "rw_ios_per_sec": 0, 00:09:38.711 "rw_mbytes_per_sec": 0, 00:09:38.711 "r_mbytes_per_sec": 0, 00:09:38.711 "w_mbytes_per_sec": 0 00:09:38.711 }, 00:09:38.711 "claimed": false, 00:09:38.711 "zoned": false, 00:09:38.711 "supported_io_types": { 00:09:38.711 "read": true, 00:09:38.711 "write": true, 00:09:38.711 "unmap": true, 00:09:38.711 "flush": true, 00:09:38.711 "reset": true, 00:09:38.711 "nvme_admin": true, 00:09:38.711 "nvme_io": true, 00:09:38.711 "nvme_io_md": false, 00:09:38.711 "write_zeroes": true, 00:09:38.711 "zcopy": false, 00:09:38.711 "get_zone_info": false, 00:09:38.711 "zone_management": false, 00:09:38.711 "zone_append": false, 00:09:38.711 "compare": true, 00:09:38.711 "compare_and_write": true, 00:09:38.711 "abort": true, 00:09:38.711 "seek_hole": false, 00:09:38.711 "seek_data": false, 00:09:38.711 "copy": true, 00:09:38.711 "nvme_iov_md": false 00:09:38.711 }, 00:09:38.711 "memory_domains": [ 00:09:38.711 { 00:09:38.711 "dma_device_id": "system", 00:09:38.711 "dma_device_type": 1 00:09:38.711 } 00:09:38.711 ], 00:09:38.711 "driver_specific": { 00:09:38.711 "nvme": [ 00:09:38.711 { 00:09:38.711 "trid": { 00:09:38.711 "trtype": "TCP", 00:09:38.711 "adrfam": "IPv4", 00:09:38.711 "traddr": "10.0.0.2", 00:09:38.711 "trsvcid": "4420", 00:09:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:38.711 }, 00:09:38.711 "ctrlr_data": { 00:09:38.711 "cntlid": 1, 00:09:38.711 "vendor_id": "0x8086", 00:09:38.711 "model_number": "SPDK bdev Controller", 00:09:38.711 "serial_number": "SPDK0", 00:09:38.711 "firmware_revision": "25.01", 00:09:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.711 "oacs": { 00:09:38.711 "security": 0, 00:09:38.711 "format": 0, 00:09:38.711 "firmware": 0, 00:09:38.711 "ns_manage": 0 00:09:38.711 }, 00:09:38.711 "multi_ctrlr": true, 00:09:38.711 
"ana_reporting": false 00:09:38.711 }, 00:09:38.711 "vs": { 00:09:38.711 "nvme_version": "1.3" 00:09:38.711 }, 00:09:38.711 "ns_data": { 00:09:38.711 "id": 1, 00:09:38.711 "can_share": true 00:09:38.711 } 00:09:38.711 } 00:09:38.711 ], 00:09:38.711 "mp_policy": "active_passive" 00:09:38.711 } 00:09:38.711 } 00:09:38.711 ] 00:09:38.969 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=146452 00:09:38.969 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.969 00:36:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.969 Running I/O for 10 seconds... 00:09:39.901 Latency(us) 00:09:39.901 [2024-12-06T23:36:56.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.901 Nvme0n1 : 1.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:09:39.901 [2024-12-06T23:36:56.052Z] =================================================================================================================== 00:09:39.901 [2024-12-06T23:36:56.052Z] Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:09:39.901 00:09:40.835 00:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:41.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.093 Nvme0n1 : 2.00 15494.50 60.53 0.00 0.00 0.00 0.00 0.00 00:09:41.093 [2024-12-06T23:36:57.244Z] =================================================================================================================== 00:09:41.093 [2024-12-06T23:36:57.244Z] Total : 15494.50 60.53 0.00 0.00 0.00 0.00 0.00 00:09:41.093 00:09:41.093 true 00:09:41.093 00:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:41.093 00:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:41.350 00:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:41.350 00:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:41.350 00:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 146452 00:09:41.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.916 Nvme0n1 : 3.00 15536.67 60.69 0.00 0.00 0.00 0.00 0.00 00:09:41.916 [2024-12-06T23:36:58.067Z] =================================================================================================================== 00:09:41.916 [2024-12-06T23:36:58.067Z] Total : 15536.67 60.69 0.00 0.00 0.00 0.00 0.00 00:09:41.916 00:09:42.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.852 Nvme0n1 : 4.00 15589.50 60.90 0.00 0.00 0.00 0.00 0.00 00:09:42.852 [2024-12-06T23:36:59.003Z] 
=================================================================================================================== 00:09:42.852 [2024-12-06T23:36:59.003Z] Total : 15589.50 60.90 0.00 0.00 0.00 0.00 0.00 00:09:42.852 00:09:44.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.228 Nvme0n1 : 5.00 15659.60 61.17 0.00 0.00 0.00 0.00 0.00 00:09:44.228 [2024-12-06T23:37:00.379Z] =================================================================================================================== 00:09:44.228 [2024-12-06T23:37:00.379Z] Total : 15659.60 61.17 0.00 0.00 0.00 0.00 0.00 00:09:44.228 00:09:45.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.164 Nvme0n1 : 6.00 15695.50 61.31 0.00 0.00 0.00 0.00 0.00 00:09:45.164 [2024-12-06T23:37:01.315Z] =================================================================================================================== 00:09:45.164 [2024-12-06T23:37:01.315Z] Total : 15695.50 61.31 0.00 0.00 0.00 0.00 0.00 00:09:45.164 00:09:46.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.101 Nvme0n1 : 7.00 15739.29 61.48 0.00 0.00 0.00 0.00 0.00 00:09:46.101 [2024-12-06T23:37:02.252Z] =================================================================================================================== 00:09:46.101 [2024-12-06T23:37:02.252Z] Total : 15739.29 61.48 0.00 0.00 0.00 0.00 0.00 00:09:46.101 00:09:47.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.038 Nvme0n1 : 8.00 15772.12 61.61 0.00 0.00 0.00 0.00 0.00 00:09:47.038 [2024-12-06T23:37:03.189Z] =================================================================================================================== 00:09:47.038 [2024-12-06T23:37:03.189Z] Total : 15772.12 61.61 0.00 0.00 0.00 0.00 0.00 00:09:47.038 00:09:47.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.973 Nvme0n1 : 9.00 15797.67 61.71 0.00 0.00 0.00 0.00 0.00 00:09:47.973 [2024-12-06T23:37:04.124Z] =================================================================================================================== 00:09:47.973 [2024-12-06T23:37:04.124Z] Total : 15797.67 61.71 0.00 0.00 0.00 0.00 0.00 00:09:47.973 00:09:48.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.910 Nvme0n1 : 10.00 15818.10 61.79 0.00 0.00 0.00 0.00 0.00 00:09:48.910 [2024-12-06T23:37:05.061Z] =================================================================================================================== 00:09:48.910 [2024-12-06T23:37:05.061Z] Total : 15818.10 61.79 0.00 0.00 0.00 0.00 0.00 00:09:48.910 00:09:48.910 00:09:48.910 Latency(us) 00:09:48.910 [2024-12-06T23:37:05.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.910 Nvme0n1 : 10.00 15826.04 61.82 0.00 0.00 8083.30 3155.44 15728.64 00:09:48.910 [2024-12-06T23:37:05.061Z] =================================================================================================================== 00:09:48.910 [2024-12-06T23:37:05.061Z] Total : 15826.04 61.82 0.00 0.00 8083.30 3155.44 15728.64 00:09:48.910 { 00:09:48.910 "results": [ 00:09:48.910 { 00:09:48.910 "job": "Nvme0n1", 00:09:48.910 "core_mask": "0x2", 00:09:48.910 "workload": "randwrite", 00:09:48.910 "status": "finished", 00:09:48.910 "queue_depth": 128, 00:09:48.910 "io_size": 4096, 00:09:48.910 
"runtime": 10.00307, 00:09:48.910 "iops": 15826.041405288577, 00:09:48.910 "mibps": 61.820474239408505, 00:09:48.910 "io_failed": 0, 00:09:48.910 "io_timeout": 0, 00:09:48.910 "avg_latency_us": 8083.29777858258, 00:09:48.910 "min_latency_us": 3155.437037037037, 00:09:48.910 "max_latency_us": 15728.64 00:09:48.910 } 00:09:48.910 ], 00:09:48.910 "core_count": 1 00:09:48.910 } 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 146397 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 146397 ']' 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 146397 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.910 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146397 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146397' 00:09:49.168 killing process with pid 146397 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 146397 00:09:49.168 Received shutdown signal, test time was about 10.000000 seconds 00:09:49.168 00:09:49.168 Latency(us) 00:09:49.168 [2024-12-06T23:37:05.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.168 [2024-12-06T23:37:05.319Z] =================================================================================================================== 00:09:49.168 [2024-12-06T23:37:05.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 146397 00:09:49.168 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.426 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.684 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:49.684 00:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.943 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.943 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.943 00:37:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 143809 00:09:49.943 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 143809 00:09:50.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 143809 Killed "${NVMF_APP[@]}" "$@" 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=147794 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 147794 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 147794 ']' 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.202 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.202 [2024-12-07 00:37:06.159192] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:09:50.202 [2024-12-07 00:37:06.159286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.202 [2024-12-07 00:37:06.236573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.202 [2024-12-07 00:37:06.283612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.202 [2024-12-07 00:37:06.283665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.202 [2024-12-07 00:37:06.283693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.202 [2024-12-07 00:37:06.283705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:50.202 [2024-12-07 00:37:06.283714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.202 [2024-12-07 00:37:06.284333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.461 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.720 [2024-12-07 00:37:06.680320] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.720 [2024-12-07 00:37:06.680464] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.720 [2024-12-07 00:37:06.680513] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.720 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.978 00:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f776e1c1-3181-43a6-b11a-0659f0daed7a -t 2000 00:09:51.237 [ 00:09:51.237 { 00:09:51.237 "name": "f776e1c1-3181-43a6-b11a-0659f0daed7a", 00:09:51.237 "aliases": [ 00:09:51.237 "lvs/lvol" 00:09:51.237 ], 00:09:51.237 "product_name": "Logical Volume", 00:09:51.237 "block_size": 4096, 00:09:51.237 "num_blocks": 38912, 00:09:51.237 "uuid": "f776e1c1-3181-43a6-b11a-0659f0daed7a", 00:09:51.237 "assigned_rate_limits": { 00:09:51.237 "rw_ios_per_sec": 0, 00:09:51.237 "rw_mbytes_per_sec": 0, 
00:09:51.237 "r_mbytes_per_sec": 0, 00:09:51.237 "w_mbytes_per_sec": 0 00:09:51.237 }, 00:09:51.237 "claimed": false, 00:09:51.237 "zoned": false, 00:09:51.237 "supported_io_types": { 00:09:51.237 "read": true, 00:09:51.237 "write": true, 00:09:51.237 "unmap": true, 00:09:51.237 "flush": false, 00:09:51.237 "reset": true, 00:09:51.237 "nvme_admin": false, 00:09:51.237 "nvme_io": false, 00:09:51.237 "nvme_io_md": false, 00:09:51.237 "write_zeroes": true, 00:09:51.237 "zcopy": false, 00:09:51.237 "get_zone_info": false, 00:09:51.237 "zone_management": false, 00:09:51.237 "zone_append": false, 00:09:51.237 "compare": false, 00:09:51.237 "compare_and_write": false, 00:09:51.237 "abort": false, 00:09:51.237 "seek_hole": true, 00:09:51.237 "seek_data": true, 00:09:51.237 "copy": false, 00:09:51.237 "nvme_iov_md": false 00:09:51.237 }, 00:09:51.237 "driver_specific": { 00:09:51.237 "lvol": { 00:09:51.237 "lvol_store_uuid": "55f93570-7aa3-45d8-93e0-d086c72f25c1", 00:09:51.237 "base_bdev": "aio_bdev", 00:09:51.237 "thin_provision": false, 00:09:51.237 "num_allocated_clusters": 38, 00:09:51.237 "snapshot": false, 00:09:51.237 "clone": false, 00:09:51.237 "esnap_clone": false 00:09:51.237 } 00:09:51.237 } 00:09:51.237 } 00:09:51.237 ] 00:09:51.237 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:51.237 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:51.237 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.496 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.496 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:51.496 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.755 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.755 00:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.013 [2024-12-07 00:37:08.033853] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:52.013 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:52.271 request: 00:09:52.271 { 00:09:52.271 "uuid": "55f93570-7aa3-45d8-93e0-d086c72f25c1", 00:09:52.271 "method": "bdev_lvol_get_lvstores", 00:09:52.271 "req_id": 1 00:09:52.271 } 00:09:52.271 Got JSON-RPC error response 00:09:52.271 response: 00:09:52.271 { 00:09:52.271 "code": -19, 00:09:52.271 "message": "No such device" 00:09:52.271 } 00:09:52.271 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:52.271 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:52.271 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:52.271 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:52.271 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.529 aio_bdev 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.529 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.529 00:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:52.787 00:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f776e1c1-3181-43a6-b11a-0659f0daed7a -t 2000 00:09:53.058 [ 00:09:53.058 { 00:09:53.058 "name": "f776e1c1-3181-43a6-b11a-0659f0daed7a", 00:09:53.058 "aliases": [ 00:09:53.058 "lvs/lvol" 00:09:53.058 ], 00:09:53.058 "product_name": "Logical Volume", 00:09:53.058 "block_size": 4096, 00:09:53.058 "num_blocks": 38912, 00:09:53.058 "uuid": "f776e1c1-3181-43a6-b11a-0659f0daed7a", 00:09:53.058 "assigned_rate_limits": { 00:09:53.058 "rw_ios_per_sec": 0, 00:09:53.058 "rw_mbytes_per_sec": 0, 00:09:53.058 "r_mbytes_per_sec": 0, 00:09:53.058 "w_mbytes_per_sec": 0 00:09:53.058 }, 00:09:53.058 "claimed": false, 00:09:53.058 "zoned": false, 00:09:53.058 "supported_io_types": { 00:09:53.058 "read": true, 00:09:53.058 "write": true, 00:09:53.058 "unmap": true, 00:09:53.058 "flush": false, 00:09:53.058 "reset": true, 00:09:53.058 "nvme_admin": false, 00:09:53.058 "nvme_io": false, 00:09:53.058 "nvme_io_md": false, 00:09:53.058 "write_zeroes": true, 00:09:53.058 "zcopy": false, 00:09:53.058 "get_zone_info": false, 00:09:53.058 "zone_management": false, 00:09:53.058 "zone_append": false, 00:09:53.058 "compare": false, 00:09:53.058 "compare_and_write": false, 00:09:53.058 "abort": false, 00:09:53.058 "seek_hole": true, 00:09:53.058 "seek_data": true, 00:09:53.058 "copy": false, 00:09:53.058 "nvme_iov_md": false 00:09:53.058 }, 00:09:53.058 "driver_specific": { 00:09:53.058 "lvol": { 00:09:53.058 "lvol_store_uuid": "55f93570-7aa3-45d8-93e0-d086c72f25c1", 00:09:53.058 "base_bdev": "aio_bdev", 00:09:53.058 "thin_provision": false, 00:09:53.058 "num_allocated_clusters": 38, 00:09:53.058 "snapshot": false, 00:09:53.058 "clone": false, 00:09:53.058 "esnap_clone": false 00:09:53.058 } 00:09:53.058 } 00:09:53.058 } 00:09:53.058 ] 00:09:53.058 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:53.058 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:53.058 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:53.319 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:53.319 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:53.319 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:53.576 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:53.576 00:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f776e1c1-3181-43a6-b11a-0659f0daed7a 00:09:53.833 00:37:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55f93570-7aa3-45d8-93e0-d086c72f25c1 00:09:54.400 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:54.400 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:54.659 00:09:54.659 real 0m19.293s 00:09:54.659 user 0m49.070s 00:09:54.659 sys 0m4.431s 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.659 ************************************ 00:09:54.659 END TEST lvs_grow_dirty 00:09:54.659 ************************************ 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:54.659 nvmf_trace.0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.659 rmmod nvme_tcp 00:09:54.659 rmmod nvme_fabrics 00:09:54.659 rmmod nvme_keyring 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:54.659 
00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 147794 ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 147794 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 147794 ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 147794 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147794 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147794' 00:09:54.659 killing process with pid 147794 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 147794 00:09:54.659 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 147794 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.919 00:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.836 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.836 00:09:56.836 real 0m42.550s 00:09:56.836 user 1m12.306s 00:09:56.836 sys 0m8.374s 00:09:56.836 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.836 00:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.836 ************************************ 00:09:56.836 END TEST nvmf_lvs_grow 00:09:56.836 ************************************ 00:09:57.097 00:37:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:57.097 00:37:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.097 00:37:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.097 00:37:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.097 ************************************ 00:09:57.097 START TEST nvmf_bdev_io_wait 00:09:57.097 ************************************ 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:57.097 * Looking for test storage... 00:09:57.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.097 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.098 --rc genhtml_branch_coverage=1 00:09:57.098 --rc genhtml_function_coverage=1 00:09:57.098 --rc genhtml_legend=1 00:09:57.098 --rc geninfo_all_blocks=1 00:09:57.098 --rc geninfo_unexecuted_blocks=1 00:09:57.098 00:09:57.098 ' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.098 --rc genhtml_branch_coverage=1 00:09:57.098 --rc genhtml_function_coverage=1 00:09:57.098 --rc genhtml_legend=1 00:09:57.098 --rc geninfo_all_blocks=1 00:09:57.098 --rc geninfo_unexecuted_blocks=1 00:09:57.098 00:09:57.098 ' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.098 --rc genhtml_branch_coverage=1 00:09:57.098 --rc genhtml_function_coverage=1 00:09:57.098 --rc genhtml_legend=1 00:09:57.098 --rc geninfo_all_blocks=1 00:09:57.098 --rc geninfo_unexecuted_blocks=1 00:09:57.098 00:09:57.098 ' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.098 --rc genhtml_branch_coverage=1 00:09:57.098 --rc genhtml_function_coverage=1 00:09:57.098 --rc genhtml_legend=1 00:09:57.098 --rc geninfo_all_blocks=1 00:09:57.098 --rc geninfo_unexecuted_blocks=1 00:09:57.098 00:09:57.098 ' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.098 00:37:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.098 00:37:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.646 00:37:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.646 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:59.647 00:09:59.647 --- 10.0.0.2 ping statistics --- 00:09:59.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.647 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:09:59.647 00:09:59.647 --- 10.0.0.1 ping statistics --- 00:09:59.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.647 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=150459 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 150459 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 150459 ']' 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.647 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.647 [2024-12-07 00:37:15.653710] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
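[Annotation, not part of the captured output] The nvmf_tcp_init sequence above moves one of the two e810 ports into a private network namespace so the NVMe/TCP target side (10.0.0.2) and the initiator side (10.0.0.1) can exchange traffic on a single host. A condensed sketch of those steps, assuming the interface names and addresses reported in this run:

  # illustrative reconstruction of the namespace setup traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # target address reachable from the host
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # initiator address reachable from the namespace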
00:09:59.647 [2024-12-07 00:37:15.653801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.647 [2024-12-07 00:37:15.727148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.647 [2024-12-07 00:37:15.778487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.647 [2024-12-07 00:37:15.778556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.647 [2024-12-07 00:37:15.778570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.647 [2024-12-07 00:37:15.778580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.647 [2024-12-07 00:37:15.778590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.647 [2024-12-07 00:37:15.780230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.647 [2024-12-07 00:37:15.780291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.647 [2024-12-07 00:37:15.780341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.647 [2024-12-07 00:37:15.780345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:59.912 [2024-12-07 00:37:16.005811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 Malloc0 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.912 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.912 [2024-12-07 00:37:16.059060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=150482 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=150484 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=150486 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.173 00:37:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.173 { 00:10:00.173 "params": { 00:10:00.173 "name": "Nvme$subsystem", 00:10:00.173 "trtype": "$TEST_TRANSPORT", 00:10:00.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.173 "adrfam": "ipv4", 00:10:00.173 "trsvcid": "$NVMF_PORT", 00:10:00.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.173 "hdgst": ${hdgst:-false}, 00:10:00.173 "ddgst": ${ddgst:-false} 00:10:00.173 }, 00:10:00.173 "method": "bdev_nvme_attach_controller" 00:10:00.173 } 00:10:00.173 EOF 00:10:00.173 )") 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=150488 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.173 { 00:10:00.173 "params": { 00:10:00.173 "name": "Nvme$subsystem", 00:10:00.173 "trtype": "$TEST_TRANSPORT", 00:10:00.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.173 "adrfam": "ipv4", 00:10:00.173 "trsvcid": "$NVMF_PORT", 00:10:00.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.173 "hdgst": ${hdgst:-false}, 00:10:00.173 "ddgst": ${ddgst:-false} 00:10:00.173 }, 00:10:00.173 "method": "bdev_nvme_attach_controller" 00:10:00.173 } 00:10:00.173 EOF 00:10:00.173 )") 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.173 { 00:10:00.173 "params": { 00:10:00.173 "name": "Nvme$subsystem", 00:10:00.173 "trtype": "$TEST_TRANSPORT", 00:10:00.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.173 "adrfam": "ipv4", 00:10:00.173 "trsvcid": "$NVMF_PORT", 00:10:00.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.173 "hdgst": ${hdgst:-false}, 00:10:00.173 "ddgst": ${ddgst:-false} 00:10:00.173 }, 00:10:00.173 "method": 
"bdev_nvme_attach_controller" 00:10:00.173 } 00:10:00.173 EOF 00:10:00.173 )") 00:10:00.173 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.174 { 00:10:00.174 "params": { 00:10:00.174 "name": "Nvme$subsystem", 00:10:00.174 "trtype": "$TEST_TRANSPORT", 00:10:00.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.174 "adrfam": "ipv4", 00:10:00.174 "trsvcid": "$NVMF_PORT", 00:10:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.174 "hdgst": ${hdgst:-false}, 00:10:00.174 "ddgst": ${ddgst:-false} 00:10:00.174 }, 00:10:00.174 "method": "bdev_nvme_attach_controller" 00:10:00.174 } 00:10:00.174 EOF 00:10:00.174 )") 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 150482 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.174 "params": { 00:10:00.174 "name": "Nvme1", 00:10:00.174 "trtype": "tcp", 00:10:00.174 "traddr": "10.0.0.2", 00:10:00.174 "adrfam": "ipv4", 00:10:00.174 "trsvcid": "4420", 00:10:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.174 "hdgst": false, 00:10:00.174 "ddgst": false 00:10:00.174 }, 00:10:00.174 "method": "bdev_nvme_attach_controller" 00:10:00.174 }' 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.174 "params": { 00:10:00.174 "name": "Nvme1", 00:10:00.174 "trtype": "tcp", 00:10:00.174 "traddr": "10.0.0.2", 00:10:00.174 "adrfam": "ipv4", 00:10:00.174 "trsvcid": "4420", 00:10:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.174 "hdgst": false, 00:10:00.174 "ddgst": false 00:10:00.174 }, 00:10:00.174 "method": "bdev_nvme_attach_controller" 00:10:00.174 }' 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.174 "params": { 00:10:00.174 "name": "Nvme1", 00:10:00.174 "trtype": "tcp", 00:10:00.174 "traddr": "10.0.0.2", 00:10:00.174 "adrfam": "ipv4", 00:10:00.174 "trsvcid": "4420", 00:10:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.174 "hdgst": false, 00:10:00.174 "ddgst": false 00:10:00.174 }, 00:10:00.174 "method": "bdev_nvme_attach_controller" 00:10:00.174 }' 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.174 00:37:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.174 "params": { 00:10:00.174 "name": "Nvme1", 00:10:00.174 "trtype": "tcp", 00:10:00.174 "traddr": "10.0.0.2", 00:10:00.174 "adrfam": "ipv4", 00:10:00.174 "trsvcid": "4420", 00:10:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.174 "hdgst": false, 00:10:00.174 "ddgst": false 00:10:00.174 }, 00:10:00.174 "method": "bdev_nvme_attach_controller" 00:10:00.174 }' 00:10:00.174 [2024-12-07 00:37:16.109812] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:10:00.174 [2024-12-07 00:37:16.109812] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:10:00.174 [2024-12-07 00:37:16.109812] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:10:00.174 [2024-12-07 00:37:16.109815] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
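[Annotation, not part of the captured output] Before the bdevperf jobs start, the trace above builds the target side over the management socket: a TCP transport, a 64 MiB / 512 B malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A condensed sketch of roughly equivalent rpc.py invocations (the trace itself goes through the rpc_cmd wrapper), with all values taken from this run:

  # illustrative reconstruction of the target-side RPCs traced above
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420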
00:10:00.174 [2024-12-07 00:37:16.109901] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:10:00.174 [2024-12-07 00:37:16.109901] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:00.174 [2024-12-07 00:37:16.109901] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:10:00.174 [2024-12-07 00:37:16.109902] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:10:00.174 [2024-12-07 00:37:16.284757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.437 [2024-12-07 00:37:16.328211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:00.437 [2024-12-07 00:37:16.383375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.437 [2024-12-07 00:37:16.425255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:00.437 [2024-12-07 00:37:16.480738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.437 [2024-12-07 00:37:16.522737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:00.437 [2024-12-07 00:37:16.581739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.729 [2024-12-07 00:37:16.623252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:10:00.729 Running I/O for 1 seconds... 00:10:00.729 Running I/O for 1 seconds... 00:10:00.729 Running I/O for 1 seconds... 00:10:00.729 Running I/O for 1 seconds...
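[Annotation, not part of the captured output] The four bdevperf jobs started above (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) each receive their NVMe-oF connection as a JSON config on /dev/fd/63, which is the gen_nvmf_target_json output printed earlier in the trace. A minimal sketch of one invocation, assuming the target address, port and subsystem NQN from this run; gen_nvmf_target_json is the helper sourced from test/nvmf/common.sh in this trace:

  # illustrative reconstruction of the write job traced above; the helper emits the
  # bdev_nvme_attach_controller config for 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256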
00:10:01.729 6666.00 IOPS, 26.04 MiB/s 00:10:01.729 Latency(us) 00:10:01.729 [2024-12-06T23:37:17.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.729 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:01.729 Nvme1n1 : 1.02 6675.82 26.08 0.00 0.00 19059.06 7767.23 28350.39 00:10:01.729 [2024-12-06T23:37:17.880Z] =================================================================================================================== 00:10:01.729 [2024-12-06T23:37:17.880Z] Total : 6675.82 26.08 0.00 0.00 19059.06 7767.23 28350.39 00:10:01.729 8753.00 IOPS, 34.19 MiB/s 00:10:01.729 Latency(us) 00:10:01.729 [2024-12-06T23:37:17.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.729 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:01.729 Nvme1n1 : 1.01 8790.72 34.34 0.00 0.00 14482.13 8980.86 25243.50 00:10:01.729 [2024-12-06T23:37:17.880Z] =================================================================================================================== 00:10:01.729 [2024-12-06T23:37:17.880Z] Total : 8790.72 34.34 0.00 0.00 14482.13 8980.86 25243.50 00:10:01.729 6600.00 IOPS, 25.78 MiB/s 00:10:01.729 Latency(us) 00:10:01.729 [2024-12-06T23:37:17.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.729 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:01.729 Nvme1n1 : 1.01 6707.89 26.20 0.00 0.00 19033.57 3276.80 43884.85 00:10:01.729 [2024-12-06T23:37:17.880Z] =================================================================================================================== 00:10:01.729 [2024-12-06T23:37:17.880Z] Total : 6707.89 26.20 0.00 0.00 19033.57 3276.80 43884.85 00:10:01.729 183296.00 IOPS, 716.00 MiB/s 00:10:01.729 Latency(us) 00:10:01.729 [2024-12-06T23:37:17.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.729 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:01.729 Nvme1n1 : 1.00 182949.44 714.65 0.00 0.00 695.82 286.72 1881.13 00:10:01.729 [2024-12-06T23:37:17.880Z] =================================================================================================================== 00:10:01.729 [2024-12-06T23:37:17.880Z] Total : 182949.44 714.65 0.00 0.00 695.82 286.72 1881.13 00:10:01.729 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 150484 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 150486 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 150488 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.040 00:37:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.040 rmmod nvme_tcp 00:10:02.040 rmmod nvme_fabrics 00:10:02.040 rmmod nvme_keyring 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 150459 ']' 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 150459 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 150459 ']' 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 150459 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150459 00:10:02.040 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.041 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.041 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150459' 00:10:02.041 killing process with pid 150459 00:10:02.041 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 150459 00:10:02.041 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 150459 00:10:02.346 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.347 00:37:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.347 00:37:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.400 00:10:04.400 real 0m7.283s 00:10:04.400 user 0m15.397s 00:10:04.400 sys 0m3.578s 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.400 ************************************ 00:10:04.400 END TEST nvmf_bdev_io_wait 00:10:04.400 ************************************ 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.400 ************************************ 00:10:04.400 START TEST nvmf_queue_depth 00:10:04.400 ************************************ 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:04.400 * Looking for test storage... 
00:10:04.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.400 --rc genhtml_branch_coverage=1 00:10:04.400 --rc genhtml_function_coverage=1 00:10:04.400 --rc genhtml_legend=1 00:10:04.400 --rc geninfo_all_blocks=1 00:10:04.400 --rc geninfo_unexecuted_blocks=1 00:10:04.400 00:10:04.400 ' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.400 --rc genhtml_branch_coverage=1 00:10:04.400 --rc genhtml_function_coverage=1 00:10:04.400 --rc genhtml_legend=1 00:10:04.400 --rc geninfo_all_blocks=1 00:10:04.400 --rc geninfo_unexecuted_blocks=1 00:10:04.400 00:10:04.400 ' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.400 --rc genhtml_branch_coverage=1 00:10:04.400 --rc genhtml_function_coverage=1 00:10:04.400 --rc genhtml_legend=1 00:10:04.400 --rc geninfo_all_blocks=1 00:10:04.400 --rc geninfo_unexecuted_blocks=1 00:10:04.400 00:10:04.400 ' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.400 --rc genhtml_branch_coverage=1 00:10:04.400 --rc genhtml_function_coverage=1 00:10:04.400 --rc genhtml_legend=1 00:10:04.400 --rc geninfo_all_blocks=1 00:10:04.400 --rc geninfo_unexecuted_blocks=1 00:10:04.400 00:10:04.400 ' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.400 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.401 00:37:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.098 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:10:07.099 00:10:07.099 --- 10.0.0.2 ping statistics --- 00:10:07.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.099 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:10:07.099 00:10:07.099 --- 10.0.0.1 ping statistics --- 00:10:07.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.099 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=152756 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 152756 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 152756 ']' 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.099 00:37:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.099 [2024-12-07 00:37:22.795330] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:07.099 [2024-12-07 00:37:22.795431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.100 [2024-12-07 00:37:22.872590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.100 [2024-12-07 00:37:22.920554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.100 [2024-12-07 00:37:22.920626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.100 [2024-12-07 00:37:22.920640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.100 [2024-12-07 00:37:22.920651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.100 [2024-12-07 00:37:22.920660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.100 [2024-12-07 00:37:22.921330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 [2024-12-07 00:37:23.073102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 Malloc0 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.100 00:37:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 [2024-12-07 00:37:23.121054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=152776 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 152776 /var/tmp/bdevperf.sock 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 152776 ']' 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.100 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.100 [2024-12-07 00:37:23.168514] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:07.100 [2024-12-07 00:37:23.168588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152776 ] 00:10:07.398 [2024-12-07 00:37:23.239406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.398 [2024-12-07 00:37:23.286084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:07.398 NVMe0n1 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.398 00:37:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.690 Running I/O for 10 seconds... 00:10:09.710 8118.00 IOPS, 31.71 MiB/s [2024-12-06T23:37:26.793Z] 8185.00 IOPS, 31.97 MiB/s [2024-12-06T23:37:27.725Z] 8191.33 IOPS, 32.00 MiB/s [2024-12-06T23:37:28.660Z] 8193.75 IOPS, 32.01 MiB/s [2024-12-06T23:37:30.036Z] 8343.20 IOPS, 32.59 MiB/s [2024-12-06T23:37:30.978Z] 8356.00 IOPS, 32.64 MiB/s [2024-12-06T23:37:31.914Z] 8334.71 IOPS, 32.56 MiB/s [2024-12-06T23:37:32.848Z] 8369.25 IOPS, 32.69 MiB/s [2024-12-06T23:37:33.779Z] 8404.33 IOPS, 32.83 MiB/s [2024-12-06T23:37:33.779Z] 8394.00 IOPS, 32.79 MiB/s 00:10:17.628 Latency(us) 00:10:17.628 [2024-12-06T23:37:33.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.628 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:17.628 Verification LBA range: start 0x0 length 0x4000 00:10:17.628 NVMe0n1 : 10.07 8435.65 32.95 0.00 0.00 120899.68 14369.37 72235.24 00:10:17.628 [2024-12-06T23:37:33.779Z] =================================================================================================================== 00:10:17.628 [2024-12-06T23:37:33.779Z] Total : 8435.65 32.95 0.00 0.00 120899.68 14369.37 72235.24 00:10:17.628 { 00:10:17.628 "results": [ 00:10:17.628 { 00:10:17.628 "job": "NVMe0n1", 00:10:17.628 "core_mask": "0x1", 00:10:17.628 "workload": "verify", 00:10:17.628 "status": "finished", 00:10:17.628 "verify_range": { 00:10:17.628 "start": 0, 00:10:17.628 "length": 16384 00:10:17.628 }, 00:10:17.628 "queue_depth": 1024, 00:10:17.628 "io_size": 4096, 00:10:17.628 "runtime": 10.069287, 00:10:17.628 "iops": 8435.65189868955, 00:10:17.628 "mibps": 32.951765229256054, 00:10:17.628 "io_failed": 0, 00:10:17.628 "io_timeout": 0, 00:10:17.628 "avg_latency_us": 120899.68195168149, 00:10:17.628 "min_latency_us": 14369.374814814815, 00:10:17.628 "max_latency_us": 72235.23555555556 00:10:17.628 } 00:10:17.628 ], 00:10:17.628 "core_count": 1 00:10:17.628 } 00:10:17.628 00:37:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 152776 00:10:17.628 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 152776 ']' 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 152776 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152776 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152776' 00:10:17.629 killing process with pid 152776 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 152776 00:10:17.629 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.629 00:10:17.629 Latency(us) 00:10:17.629 [2024-12-06T23:37:33.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.629 [2024-12-06T23:37:33.780Z] =================================================================================================================== 00:10:17.629 [2024-12-06T23:37:33.780Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.629 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 152776 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.886 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.886 rmmod nvme_tcp 00:10:17.886 rmmod nvme_fabrics 00:10:17.886 rmmod nvme_keyring 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 152756 ']' 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 152756 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 152756 ']' 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 152756 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.887 00:37:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152756 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152756' 00:10:18.146 killing process with pid 152756 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 152756 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 152756 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.146 00:37:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.686 00:10:20.686 real 0m15.959s 00:10:20.686 user 0m21.487s 00:10:20.686 sys 0m3.523s 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.686 ************************************ 00:10:20.686 END TEST nvmf_queue_depth 00:10:20.686 ************************************ 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.686 ************************************ 00:10:20.686 START TEST nvmf_target_multipath 00:10:20.686 ************************************ 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.686 * Looking for test storage... 00:10:20.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.686 --rc genhtml_branch_coverage=1 00:10:20.686 --rc genhtml_function_coverage=1 00:10:20.686 --rc genhtml_legend=1 00:10:20.686 --rc geninfo_all_blocks=1 00:10:20.686 --rc geninfo_unexecuted_blocks=1 00:10:20.686 00:10:20.686 ' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.686 --rc genhtml_branch_coverage=1 00:10:20.686 --rc genhtml_function_coverage=1 00:10:20.686 --rc genhtml_legend=1 00:10:20.686 --rc geninfo_all_blocks=1 00:10:20.686 --rc geninfo_unexecuted_blocks=1 00:10:20.686 00:10:20.686 ' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.686 --rc genhtml_branch_coverage=1 00:10:20.686 --rc genhtml_function_coverage=1 00:10:20.686 --rc genhtml_legend=1 00:10:20.686 --rc geninfo_all_blocks=1 00:10:20.686 --rc geninfo_unexecuted_blocks=1 00:10:20.686 00:10:20.686 ' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.686 --rc genhtml_branch_coverage=1 00:10:20.686 --rc genhtml_function_coverage=1 00:10:20.686 --rc genhtml_legend=1 00:10:20.686 --rc geninfo_all_blocks=1 00:10:20.686 --rc geninfo_unexecuted_blocks=1 00:10:20.686 00:10:20.686 ' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.686 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.687 00:37:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.594 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:22.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:22.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:22.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.595 00:37:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:22.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:10:22.595 00:10:22.595 --- 10.0.0.2 ping statistics --- 00:10:22.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.595 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:22.595 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:22.856 00:10:22.856 --- 10.0.0.1 ping statistics --- 00:10:22.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.856 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:22.856 only one NIC for nvmf test 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
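For reference, the network topology that nvmf_tcp_init assembles in the trace above can be reproduced by hand with the same ip/iptables commands. This is a minimal sketch, assuming root privileges and the interface names cvl_0_0/cvl_0_1 and 10.0.0.0/24 addressing used in this run:

  # Move the target-side NIC into its own namespace and address both ends of the test subnet.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port toward the initiator interface; the SPDK_NVMF comment tag is what
  # nvmftestfini later greps out of iptables-save to undo the rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify reachability in both directions, as the ping output above shows.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1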
00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.856 rmmod nvme_tcp 00:10:22.856 rmmod nvme_fabrics 00:10:22.856 rmmod nvme_keyring 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.856 00:37:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.764 00:10:24.764 real 0m4.541s 00:10:24.764 user 0m0.891s 00:10:24.764 sys 0m1.663s 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.764 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:24.764 ************************************ 00:10:24.764 END TEST nvmf_target_multipath 00:10:24.764 ************************************ 00:10:25.024 00:37:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:25.024 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.024 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.024 00:37:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.024 ************************************ 00:10:25.024 START TEST nvmf_zcopy 00:10:25.024 ************************************ 00:10:25.024 00:37:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:25.024 * Looking for test storage... 
00:10:25.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.024 --rc genhtml_branch_coverage=1 00:10:25.024 --rc genhtml_function_coverage=1 00:10:25.024 --rc genhtml_legend=1 00:10:25.024 --rc geninfo_all_blocks=1 00:10:25.024 --rc geninfo_unexecuted_blocks=1 00:10:25.024 00:10:25.024 ' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.024 --rc genhtml_branch_coverage=1 00:10:25.024 --rc genhtml_function_coverage=1 00:10:25.024 --rc genhtml_legend=1 00:10:25.024 --rc geninfo_all_blocks=1 00:10:25.024 --rc geninfo_unexecuted_blocks=1 00:10:25.024 00:10:25.024 ' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.024 --rc genhtml_branch_coverage=1 00:10:25.024 --rc genhtml_function_coverage=1 00:10:25.024 --rc genhtml_legend=1 00:10:25.024 --rc geninfo_all_blocks=1 00:10:25.024 --rc geninfo_unexecuted_blocks=1 00:10:25.024 00:10:25.024 ' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.024 --rc genhtml_branch_coverage=1 00:10:25.024 --rc genhtml_function_coverage=1 00:10:25.024 --rc genhtml_legend=1 00:10:25.024 --rc geninfo_all_blocks=1 00:10:25.024 --rc geninfo_unexecuted_blocks=1 00:10:25.024 00:10:25.024 ' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.024 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.025 00:37:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:27.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:27.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:27.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.564 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:27.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:10:27.565 00:10:27.565 --- 10.0.0.2 ping statistics --- 00:10:27.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.565 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:10:27.565 00:10:27.565 --- 10.0.0.1 ping statistics --- 00:10:27.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.565 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=158007 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 158007 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 158007 ']' 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.565 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.565 [2024-12-07 00:37:43.643371] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:27.565 [2024-12-07 00:37:43.643457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.825 [2024-12-07 00:37:43.720724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.825 [2024-12-07 00:37:43.768939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.825 [2024-12-07 00:37:43.768992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.825 [2024-12-07 00:37:43.769028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.825 [2024-12-07 00:37:43.769040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.825 [2024-12-07 00:37:43.769050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.825 [2024-12-07 00:37:43.769624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 [2024-12-07 00:37:43.917871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 [2024-12-07 00:37:43.934102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 malloc0 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:27.825 { 00:10:27.825 "params": { 00:10:27.825 "name": "Nvme$subsystem", 00:10:27.825 "trtype": "$TEST_TRANSPORT", 00:10:27.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.825 "adrfam": "ipv4", 00:10:27.825 "trsvcid": "$NVMF_PORT", 00:10:27.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.825 "hdgst": ${hdgst:-false}, 00:10:27.825 "ddgst": ${ddgst:-false} 00:10:27.825 }, 00:10:27.825 "method": "bdev_nvme_attach_controller" 00:10:27.825 } 00:10:27.825 EOF 00:10:27.825 )") 00:10:27.825 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:28.084 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
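Before the first bdevperf run, zcopy.sh builds the target through the RPCs traced above. As a rough standalone sketch (assuming nvmf_tgt is already running, as started here via ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, and answering on the default /var/tmp/spdk.sock socket), the same configuration can be issued with scripts/rpc.py:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with the exact options the test computed (-t tcp -o -c 0 --zcopy).
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem cnode1: any host allowed (-a), fixed serial (-s), at most 10 namespaces (-m).
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data listener plus the discovery listener on the target-side address.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with 4096-byte blocks, attached to cnode1 as namespace 1.
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON that gen_nvmf_target_json assembles in the surrounding trace is what bdevperf then consumes through --json /dev/fd/62 to attach to this subsystem at 10.0.0.2:4420.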
00:10:28.084 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:28.084 00:37:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.084 "params": { 00:10:28.084 "name": "Nvme1", 00:10:28.084 "trtype": "tcp", 00:10:28.084 "traddr": "10.0.0.2", 00:10:28.084 "adrfam": "ipv4", 00:10:28.084 "trsvcid": "4420", 00:10:28.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.084 "hdgst": false, 00:10:28.084 "ddgst": false 00:10:28.084 }, 00:10:28.084 "method": "bdev_nvme_attach_controller" 00:10:28.084 }' 00:10:28.084 [2024-12-07 00:37:44.013626] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:10:28.084 [2024-12-07 00:37:44.013707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158057 ] 00:10:28.084 [2024-12-07 00:37:44.081148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.084 [2024-12-07 00:37:44.128712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.342 Running I/O for 10 seconds... 00:10:30.649 5750.00 IOPS, 44.92 MiB/s [2024-12-06T23:37:47.734Z] 5823.00 IOPS, 45.49 MiB/s [2024-12-06T23:37:48.668Z] 5839.67 IOPS, 45.62 MiB/s [2024-12-06T23:37:49.604Z] 5849.00 IOPS, 45.70 MiB/s [2024-12-06T23:37:50.538Z] 5852.20 IOPS, 45.72 MiB/s [2024-12-06T23:37:51.472Z] 5853.83 IOPS, 45.73 MiB/s [2024-12-06T23:37:52.405Z] 5852.43 IOPS, 45.72 MiB/s [2024-12-06T23:37:53.780Z] 5857.12 IOPS, 45.76 MiB/s [2024-12-06T23:37:54.714Z] 5858.22 IOPS, 45.77 MiB/s [2024-12-06T23:37:54.714Z] 5858.80 IOPS, 45.77 MiB/s 00:10:38.563 Latency(us) 00:10:38.563 [2024-12-06T23:37:54.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:38.563 Verification LBA range: start 0x0 length 0x1000 00:10:38.563 Nvme1n1 : 10.02 5862.36 45.80 0.00 0.00 21772.56 3446.71 30292.20 00:10:38.563 [2024-12-06T23:37:54.714Z] =================================================================================================================== 00:10:38.563 [2024-12-06T23:37:54.714Z] Total : 5862.36 45.80 0.00 0.00 21772.56 3446.71 30292.20 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=159345 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:38.563 { 00:10:38.563 "params": { 00:10:38.563 "name": 
"Nvme$subsystem", 00:10:38.563 "trtype": "$TEST_TRANSPORT", 00:10:38.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.563 "adrfam": "ipv4", 00:10:38.563 "trsvcid": "$NVMF_PORT", 00:10:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.563 "hdgst": ${hdgst:-false}, 00:10:38.563 "ddgst": ${ddgst:-false} 00:10:38.563 }, 00:10:38.563 "method": "bdev_nvme_attach_controller" 00:10:38.563 } 00:10:38.563 EOF 00:10:38.563 )") 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:38.563 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:38.563 [2024-12-07 00:37:54.604753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.604790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:38.564 00:37:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:38.564 "params": { 00:10:38.564 "name": "Nvme1", 00:10:38.564 "trtype": "tcp", 00:10:38.564 "traddr": "10.0.0.2", 00:10:38.564 "adrfam": "ipv4", 00:10:38.564 "trsvcid": "4420", 00:10:38.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.564 "hdgst": false, 00:10:38.564 "ddgst": false 00:10:38.564 }, 00:10:38.564 "method": "bdev_nvme_attach_controller" 00:10:38.564 }' 00:10:38.564 [2024-12-07 00:37:54.612709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.612730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.620730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.620751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.628750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.628770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.636777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.636797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.642305] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:38.564 [2024-12-07 00:37:54.642395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159345 ] 00:10:38.564 [2024-12-07 00:37:54.644804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.644823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.652813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.652833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.660832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.660851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.668855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.668874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.676880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.676900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.684898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.684917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.692919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.692938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.700948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.700968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.564 [2024-12-07 00:37:54.708974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.564 [2024-12-07 00:37:54.709003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.714224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.822 [2024-12-07 00:37:54.717019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.717056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.725073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.725110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.733063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.733088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.741068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.741089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:38.822 [2024-12-07 00:37:54.749085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.749105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.757107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.757127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.760955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.822 [2024-12-07 00:37:54.765140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.765160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.773163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.773183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.781216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.781251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.789234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.789270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.797287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.797323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.805302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.805338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.813324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.813359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.822 [2024-12-07 00:37:54.821316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.822 [2024-12-07 00:37:54.821337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.829368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.829402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.837386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.837421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.845423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.845457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.853422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.853443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 
00:37:54.861437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.861466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.869451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.869476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.877471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.877493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.885496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.885518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.893518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.893539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.901540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.901561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.909557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.909578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.917581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.917600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.925605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.925624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.933627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.933647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.941649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.941668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.949675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.949696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.957694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.957714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.823 [2024-12-07 00:37:54.965715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.823 [2024-12-07 00:37:54.965734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:54.973737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:54.973757] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:54.981759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:54.981778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:54.989802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:54.989823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:54.997805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:54.997824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:55.005826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:55.005846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:55.013850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:55.013870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:55.021873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.103 [2024-12-07 00:37:55.021892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.103 [2024-12-07 00:37:55.029897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.029916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.037918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.037937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.045954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.046004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.054014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.054038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 Running I/O for 5 seconds... 
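Both bdevperf runs above use an 8 KiB I/O size (-o 8192), so the MiB/s value printed next to each IOPS sample is just IOPS multiplied by 8192 bytes. A quick sanity check with bc, using one figure from each run as reported in this log (arithmetic only, not additional measurements):

  echo 'scale=2; 5862.36 * 8192 / 1048576' | bc     # ~45.8 MiB/s, matching the 10 s verify run total
  echo 'scale=2; 11840.00 * 8192 / 1048576' | bc    # 92.50 MiB/s, matching an early sample of the 5 s randrw run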
00:10:39.104 [2024-12-07 00:37:55.068936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.068965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.079824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.079852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.090605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.090631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.101452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.101480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.115033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.115061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.125589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.125615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.136606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.136633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.149234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.149261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.159481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.159508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.170455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.170483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.183426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.183453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.104 [2024-12-07 00:37:55.193659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.104 [2024-12-07 00:37:55.193685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.105 [2024-12-07 00:37:55.204220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.105 [2024-12-07 00:37:55.204247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.105 [2024-12-07 00:37:55.214918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.105 [2024-12-07 00:37:55.214944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.105 [2024-12-07 00:37:55.225525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.105 
[2024-12-07 00:37:55.225552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.105 [2024-12-07 00:37:55.236318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.105 [2024-12-07 00:37:55.236345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.247264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.247307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.259541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.259566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.269728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.269755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.280511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.280537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.293019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.293046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.302938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.302964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.314109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.314137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.326532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.326559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.336635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.336662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.347800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.347827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.358867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.358895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.369467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.369495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.381882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.381910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.391989] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.392028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.402732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.402759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.366 [2024-12-07 00:37:55.415278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.366 [2024-12-07 00:37:55.415306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.427135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.427162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.436314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.436341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.448308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.448335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.459215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.459242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.469649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.469676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.480685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.480711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.493245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.493273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.502760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.502786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.367 [2024-12-07 00:37:55.513458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.367 [2024-12-07 00:37:55.513485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.524244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.524271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.536431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.536459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.546314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.546341] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.556848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.556876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.569160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.569187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.579312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.579340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.590494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.590520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.604095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.604122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.614755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.614782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.625501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.625540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.636240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.636267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.647216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.647245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.660237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.660264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.670212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.670239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.680764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.680790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.691451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.691478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.701971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.702021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.712833] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.712859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.725285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.725326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.735388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.735414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.745954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.746005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.756550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.756576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.624 [2024-12-07 00:37:55.767394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.624 [2024-12-07 00:37:55.767422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.779742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.779784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.789596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.789622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.800723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.800749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.813459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.813486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.825154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.825181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.834158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.834193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.845597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.845623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.882 [2024-12-07 00:37:55.856306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.882 [2024-12-07 00:37:55.856333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.867122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.867149] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.878165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.878192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.889150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.889177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.901666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.901692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.911633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.911659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.922058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.922085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.932621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.932646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.943279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.943321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.954186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.954213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.966915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.966942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.977318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.977359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.988073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.988101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:55.998768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:55.998795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:56.009398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:56.009424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:56.019715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:56.019741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.883 [2024-12-07 00:37:56.030217] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.883 [2024-12-07 00:37:56.030245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.040815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.040849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.051584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.051611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 11840.00 IOPS, 92.50 MiB/s [2024-12-06T23:37:56.292Z] [2024-12-07 00:37:56.064092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.064119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.074082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.074109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.084802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.084828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.095676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.095701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.106809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.106835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.117745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.117771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.128467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.128494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.140802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.140828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 [2024-12-07 00:37:56.151335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.141 [2024-12-07 00:37:56.151361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.161757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.161783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.172436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.172462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.183408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:40.142 [2024-12-07 00:37:56.183434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.195784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.195810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.206030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.206057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.216504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.216531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.226879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.226905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.237687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.237713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.250169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.250196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.260221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.260248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.270922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.270949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.142 [2024-12-07 00:37:56.283712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.142 [2024-12-07 00:37:56.283738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.293720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.293747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.304507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.304534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.315100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.315127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.325703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.325729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.336967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.337017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.347169] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.347197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.357876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.357902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.371036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.371063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.380771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.380798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.391457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.391484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.402700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.402726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.413202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.413229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.424222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.424249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.436776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.436803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.447176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.447203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.457790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.457817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.470451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.470478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.482194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.482221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.491326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.491353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.503225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.503253] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.513924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.513950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.524644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.524671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.535667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.535693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.401 [2024-12-07 00:37:56.546711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.401 [2024-12-07 00:37:56.546738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.557641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.557667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.569041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.569078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.579929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.579956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.590722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.590759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.601708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.601734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.612747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.612773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.625436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.625463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.635449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.635475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.645940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.659 [2024-12-07 00:37:56.645966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.659 [2024-12-07 00:37:56.658331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.658357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.668349] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.668389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.679082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.679110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.689893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.689919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.702501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.702526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.712713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.712739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.723456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.723483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.734245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.734272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.745403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.745430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.756129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.756156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.766770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.766797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.777330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.777356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.788138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.788166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.660 [2024-12-07 00:37:56.800721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.660 [2024-12-07 00:37:56.800747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.810671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.810697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.821467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.821493] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.831927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.831952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.842426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.842452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.853109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.853136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.864428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.864462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.877520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.877547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.887481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.887508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.898139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.898168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.908861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.908887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.919908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.919935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.931697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.931724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.941469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.941496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.951911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.951937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.962555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.962584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.973194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.973221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.983734] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.983760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:56.994402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:56.994428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.005347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.005373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.015779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.015805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.026665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.026691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.037410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.037452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.047773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.047799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.918 [2024-12-07 00:37:57.059045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.918 [2024-12-07 00:37:57.059074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.180 11833.50 IOPS, 92.45 MiB/s [2024-12-06T23:37:57.331Z] [2024-12-07 00:37:57.071756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.071790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.082148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.082175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.092788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.092814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.103262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.103303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.114358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.114385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.126655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.126683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.135370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
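The repeated pair of messages above is the negative-path namespace test doing its job: every nvmf_subsystem_add_ns request that asks for an NSID already claimed by an existing namespace is rejected in spdk_nvmf_subsystem_add_ns_ext (subsystem.c:2130) and is reported by nvmf_rpc_ns_paused (nvmf_rpc.c:1520) as "Unable to add namespace". A minimal sketch of how such a collision can be provoked against a running SPDK target is shown below; the bdev names, NQN and serial number are illustrative, and it assumes a running nvmf_tgt reachable over the default rpc.py socket.

    # Minimal sketch (illustrative names; assumes nvmf_tgt is running with the default RPC socket)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # First add of NSID 1 succeeds.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # NSID 1 is now taken, so this call fails and the target logs
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1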
00:10:41.181 [2024-12-07 00:37:57.135397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.146793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.146820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.157536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.157563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.168197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.168225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.179283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.179309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.190040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.190067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.202736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.202762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.212775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.212801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.223312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.223339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.234291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.234318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.246743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.246770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.255838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.255866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.266989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.267024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.277240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.277276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.287658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.287684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.297752] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.297778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.308171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.308198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.318280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.318307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.181 [2024-12-07 00:37:57.328661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.181 [2024-12-07 00:37:57.328688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.339015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.339042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.349271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.349298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.360418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.360445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.374237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.374264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.384617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.384643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.395035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.395062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.405648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.405675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.416066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.416092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.426564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.426590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.437316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.437342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.447817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.447843] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.458774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.458800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.469413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.469439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.482799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.482833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.493184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.493211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.503751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.503777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.514928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.514954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.525775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.525801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.536138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.536164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.547165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.547191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.559753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.559779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.569267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.569308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.440 [2024-12-07 00:37:57.580633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.440 [2024-12-07 00:37:57.580659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.591133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.591160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.601851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.601877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.614760] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.614786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.625387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.625413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.635961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.636013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.646749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.646775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.657814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.657842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.671366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.671392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.682115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.682144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.693208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.693235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.705868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.705894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.715773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.715800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.727072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.727115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.739899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.739926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.750068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.750095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.760684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.760710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.771375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.771402] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.784269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.784296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.796060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.796087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.805432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.805459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.816928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.816954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.830386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.830412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.699 [2024-12-07 00:37:57.841001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.699 [2024-12-07 00:37:57.841028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.851641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.851668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.862426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.862452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.873415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.873441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.884250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.884277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.895026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.895052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.907665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.907692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.918150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.918177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.929198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.929226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.939970] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.940022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.951009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.951058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.963845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.963872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.974151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.974177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.984798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.984825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:57.995326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:57.995354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:58.006427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:58.006454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:58.017675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:58.017702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:58.028567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.958 [2024-12-07 00:37:58.028594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.958 [2024-12-07 00:37:58.039560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.039587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 [2024-12-07 00:37:58.050811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.050837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 [2024-12-07 00:37:58.061202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.061229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 11835.33 IOPS, 92.46 MiB/s [2024-12-06T23:37:58.110Z] [2024-12-07 00:37:58.071897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.071923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 [2024-12-07 00:37:58.082594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.082621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 [2024-12-07 00:37:58.093122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
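The interleaved "IOPS, MiB/s" lines (e.g. 11835.33 IOPS, 92.46 MiB/s above) appear to be periodic throughput summaries from the I/O workload running concurrently with the RPC error loop, so the error spam and the performance counters share the same console stream. To see why NSID 1 is occupied, the subsystem's current namespaces can be listed; the sketch below is illustrative, the NQN is assumed, and the jq filtering presumes jq is available on the test host.

    # List the namespaces (and their NSIDs) held by the subsystem under test.
    ./scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'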
00:10:41.959 [2024-12-07 00:37:58.093148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.959 [2024-12-07 00:37:58.103744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.959 [2024-12-07 00:37:58.103779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.217 [2024-12-07 00:37:58.114289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.217 [2024-12-07 00:37:58.114316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.217 [2024-12-07 00:37:58.127134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.217 [2024-12-07 00:37:58.127162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.217 [2024-12-07 00:37:58.137355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.217 [2024-12-07 00:37:58.137382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.148222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.148250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.162014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.162041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.172113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.172140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.182210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.182237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.192534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.192561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.203626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.203652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.216510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.216536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.226842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.226868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.237850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.237876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.250363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.250390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.260471] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.260497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.271047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.271075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.281967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.282017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.294637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.294663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.304846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.304872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.315378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.315411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.326038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.326065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.336871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.336897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.349712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.349738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.218 [2024-12-07 00:37:58.359592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.218 [2024-12-07 00:37:58.359619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.370414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.370441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.381219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.381246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.391725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.391752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.402705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.402733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.413547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.413574] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.426364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.426391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.436428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.436455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.446754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.446781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.457005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.457032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.467547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.467575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.480194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.480221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.489956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.489983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.500471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.500498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.511025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.511052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.521275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.521308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.532142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.532169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.542849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.542877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.553319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.553347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.565932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.565959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.576264] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.576291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.586725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.586751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.597384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.597411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.608172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.608199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.477 [2024-12-07 00:37:58.618801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.477 [2024-12-07 00:37:58.618827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.629530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.629558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.640649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.640675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.651759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.651785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.664660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.664687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.675293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.675334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.685894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.685920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.696546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.696572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.707402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.707428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.720200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.720228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.730420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.730458] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.741295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.741322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.751837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.751863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.762632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.762658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.773484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.773512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.784257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.784284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.795053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.795081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.805987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.806036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.819257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.819284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.830109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.830136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.841444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.841470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.852411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.852438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.862892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.862919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.736 [2024-12-07 00:37:58.873734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.736 [2024-12-07 00:37:58.873761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.886028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.886055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.895487] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.895515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.906408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.906435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.919578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.919605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.929821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.929847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.940237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.940263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.951038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.951074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.962157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.962184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.972950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.972990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.986562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.986588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:58.996972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:58.997006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.007919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.007945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.021856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.021883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.032600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.032625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.043395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.043421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.054222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.054249] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.065238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.065265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 11841.00 IOPS, 92.51 MiB/s [2024-12-06T23:37:59.161Z] [2024-12-07 00:37:59.076098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.076125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.086945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.086972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.099473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.099499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.109762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.109788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.120281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.120323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.131052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.131079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.141862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.141887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.010 [2024-12-07 00:37:59.152673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.010 [2024-12-07 00:37:59.152700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 00:37:59.165459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.269 [2024-12-07 00:37:59.165485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 00:37:59.175692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.269 [2024-12-07 00:37:59.175718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 00:37:59.186818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.269 [2024-12-07 00:37:59.186843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 00:37:59.199625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.269 [2024-12-07 00:37:59.199652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 00:37:59.209612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.269 [2024-12-07 00:37:59.209639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.269 [2024-12-07 
00:37:59.220361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:43.269 [2024-12-07 00:37:59.220389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two errors repeat for every subsequent nvmf_subsystem_add_ns attempt while NSID 1 is still claimed; several hundred near-identical entries between 00:37:59.231 and 00:38:00.075 omitted ...]
00:10:44.046 11837.40 IOPS, 92.48 MiB/s [2024-12-06T23:38:00.197Z]
00:10:44.046 Latency(us)
00:10:44.046 [2024-12-06T23:38:00.197Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:44.046 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:44.046 Nvme1n1             :       5.01   11838.31      92.49      0.00     0.00   10796.61    3325.35   18544.26
00:10:44.046 [2024-12-06T23:38:00.197Z] ===================================================================================================================
00:10:44.046 [2024-12-06T23:38:00.197Z] Total               :              11838.31      92.49      0.00     0.00   10796.61    3325.35   18544.26
[... the 'Requested NSID 1 already in use' / 'Unable to add namespace' pair continues for the remaining attempts up to 00:38:00.244 ...]
00:10:44.306 [2024-12-07 00:38:00.252039]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.306 [2024-12-07 00:38:00.252084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.306 [2024-12-07 00:38:00.260065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.306 [2024-12-07 00:38:00.260112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.306 [2024-12-07 00:38:00.268063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.306 [2024-12-07 00:38:00.268089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.306 [2024-12-07 00:38:00.276080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.306 [2024-12-07 00:38:00.276103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.306 [2024-12-07 00:38:00.284093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:44.306 [2024-12-07 00:38:00.284117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (159345) - No such process 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 159345 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.306 delay0 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.306 00:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:44.306 [2024-12-07 00:38:00.412920] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:50.863 Initializing NVMe Controllers 00:10:50.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 0 00:10:50.863 Initialization complete. Launching workers. 00:10:50.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 365 00:10:50.863 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 652, failed to submit 33 00:10:50.863 success 518, unsuccessful 134, failed 0 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.863 rmmod nvme_tcp 00:10:50.863 rmmod nvme_fabrics 00:10:50.863 rmmod nvme_keyring 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 158007 ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 158007 ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158007' 00:10:50.863 killing process with pid 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 158007 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.863 00:38:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.408 00:38:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.408 00:10:53.408 real 0m28.049s 00:10:53.408 user 0m41.807s 00:10:53.408 sys 0m7.705s 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.408 ************************************ 00:10:53.408 END TEST nvmf_zcopy 00:10:53.408 ************************************ 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.408 ************************************ 00:10:53.408 START TEST nvmf_nmic 00:10:53.408 ************************************ 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:53.408 * Looking for test storage... 
00:10:53.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:53.408 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.409 --rc genhtml_branch_coverage=1 00:10:53.409 --rc genhtml_function_coverage=1 00:10:53.409 --rc genhtml_legend=1 00:10:53.409 --rc geninfo_all_blocks=1 00:10:53.409 --rc geninfo_unexecuted_blocks=1 00:10:53.409 00:10:53.409 ' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.409 --rc genhtml_branch_coverage=1 00:10:53.409 --rc genhtml_function_coverage=1 00:10:53.409 --rc genhtml_legend=1 00:10:53.409 --rc geninfo_all_blocks=1 00:10:53.409 --rc geninfo_unexecuted_blocks=1 00:10:53.409 00:10:53.409 ' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.409 --rc genhtml_branch_coverage=1 00:10:53.409 --rc genhtml_function_coverage=1 00:10:53.409 --rc genhtml_legend=1 00:10:53.409 --rc geninfo_all_blocks=1 00:10:53.409 --rc geninfo_unexecuted_blocks=1 00:10:53.409 00:10:53.409 ' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.409 --rc genhtml_branch_coverage=1 00:10:53.409 --rc genhtml_function_coverage=1 00:10:53.409 --rc genhtml_legend=1 00:10:53.409 --rc geninfo_all_blocks=1 00:10:53.409 --rc geninfo_unexecuted_blocks=1 00:10:53.409 00:10:53.409 ' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:53.409 
00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.409 00:38:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.361 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.362 00:38:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.362 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:10:55.621 00:10:55.621 --- 10.0.0.2 ping statistics --- 00:10:55.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.621 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:10:55.621 00:10:55.621 --- 10.0.0.1 ping statistics --- 00:10:55.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.621 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=162748 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 162748 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 162748 ']' 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.621 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.621 [2024-12-07 00:38:11.669917] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
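At this point the harness has moved one E810 port (cvl_0_0) into a private network namespace, addressed both sides, opened TCP port 4420, verified connectivity in both directions, loaded nvme-tcp, and launched nvmf_tgt inside the namespace. A minimal manual equivalent, sketched from the commands traced above and assuming the same cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and an SPDK build tree as the working directory, would be roughly:

# target-side namespace and addressing (what nvmftestinit just did above)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &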
00:10:55.621 [2024-12-07 00:38:11.670015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.621 [2024-12-07 00:38:11.741091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.879 [2024-12-07 00:38:11.786480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.879 [2024-12-07 00:38:11.786534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.879 [2024-12-07 00:38:11.786561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.879 [2024-12-07 00:38:11.786572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.879 [2024-12-07 00:38:11.786581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.879 [2024-12-07 00:38:11.788161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.879 [2024-12-07 00:38:11.788187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.879 [2024-12-07 00:38:11.788245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.879 [2024-12-07 00:38:11.788248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.879 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 [2024-12-07 00:38:11.929457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 Malloc0 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 [2024-12-07 00:38:12.000779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:55.880 test case1: single bdev can't be used in multiple subsystems 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.880 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:55.880 [2024-12-07 00:38:12.024652] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:55.880 [2024-12-07 00:38:12.024682] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:55.880 [2024-12-07 00:38:12.024711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.880 request: 00:10:55.880 { 00:10:55.880 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:55.880 "namespace": { 00:10:55.880 "bdev_name": "Malloc0", 00:10:56.137 "no_auto_visible": false, 
00:10:56.137 "hide_metadata": false 00:10:56.137 }, 00:10:56.137 "method": "nvmf_subsystem_add_ns", 00:10:56.137 "req_id": 1 00:10:56.137 } 00:10:56.137 Got JSON-RPC error response 00:10:56.137 response: 00:10:56.137 { 00:10:56.137 "code": -32602, 00:10:56.137 "message": "Invalid parameters" 00:10:56.137 } 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:56.137 Adding namespace failed - expected result. 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:56.137 test case2: host connect to nvmf target in multiple paths 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 [2024-12-07 00:38:12.032774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.137 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.703 00:38:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:57.268 00:38:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.268 00:38:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:57.268 00:38:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.268 00:38:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:57.268 00:38:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.797 00:38:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:59.797 00:38:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:59.797 [global] 00:10:59.797 thread=1 00:10:59.797 invalidate=1 00:10:59.797 rw=write 00:10:59.797 time_based=1 00:10:59.797 runtime=1 00:10:59.797 ioengine=libaio 00:10:59.797 direct=1 00:10:59.797 bs=4096 00:10:59.797 iodepth=1 00:10:59.797 norandommap=0 00:10:59.797 numjobs=1 00:10:59.797 00:10:59.797 verify_dump=1 00:10:59.797 verify_backlog=512 00:10:59.797 verify_state_save=0 00:10:59.797 do_verify=1 00:10:59.797 verify=crc32c-intel 00:10:59.797 [job0] 00:10:59.797 filename=/dev/nvme0n1 00:10:59.797 Could not set queue depth (nvme0n1) 00:10:59.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.797 fio-3.35 00:10:59.797 Starting 1 thread 00:11:01.173 00:11:01.173 job0: (groupid=0, jobs=1): err= 0: pid=163269: Sat Dec 7 00:38:17 2024 00:11:01.173 read: IOPS=808, BW=3235KiB/s (3313kB/s)(3264KiB/1009msec) 00:11:01.173 slat (nsec): min=4251, max=64972, avg=9140.88, stdev=7183.31 00:11:01.173 clat (usec): min=157, max=42073, avg=1023.99, stdev=5764.28 00:11:01.173 lat (usec): min=161, max=42089, avg=1033.13, stdev=5767.08 00:11:01.173 clat percentiles (usec): 00:11:01.173 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:11:01.173 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:11:01.173 | 70.00th=[ 212], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 285], 00:11:01.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.173 | 99.99th=[42206] 00:11:01.173 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:01.173 slat (nsec): min=5697, max=60680, avg=9940.12, stdev=6019.18 00:11:01.173 clat (usec): min=119, max=1599, avg=147.30, stdev=47.52 00:11:01.173 lat (usec): min=125, max=1605, avg=157.24, stdev=48.49 00:11:01.173 clat percentiles (usec): 00:11:01.173 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 137], 00:11:01.173 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:11:01.173 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 165], 00:11:01.173 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 249], 99.95th=[ 1598], 00:11:01.173 | 99.99th=[ 1598] 00:11:01.173 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.173 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.173 lat (usec) : 250=92.72%, 500=6.36% 00:11:01.173 lat (msec) : 2=0.05%, 50=0.87% 00:11:01.173 cpu : usr=0.89%, sys=1.79%, ctx=1840, majf=0, minf=1 00:11:01.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.173 issued rwts: total=816,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.173 00:11:01.173 Run status group 0 (all jobs): 00:11:01.173 READ: bw=3235KiB/s (3313kB/s), 3235KiB/s-3235KiB/s (3313kB/s-3313kB/s), io=3264KiB (3342kB), run=1009-1009msec 00:11:01.173 WRITE: bw=4059KiB/s (4157kB/s), 4059KiB/s-4059KiB/s (4157kB/s-4157kB/s), io=4096KiB (4194kB), run=1009-1009msec 00:11:01.173 00:11:01.173 Disk stats (read/write): 00:11:01.173 nvme0n1: ios=863/1024, merge=0/0, ticks=730/139, in_queue=869, util=91.48% 00:11:01.173 00:38:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.173 rmmod nvme_tcp 00:11:01.173 rmmod nvme_fabrics 00:11:01.173 rmmod nvme_keyring 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:01.173 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 162748 ']' 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 162748 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 162748 ']' 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 162748 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162748 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162748' 00:11:01.174 killing process with pid 162748 00:11:01.174 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 162748 00:11:01.174 00:38:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 162748 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.434 00:38:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.978 00:11:03.978 real 0m10.478s 00:11:03.978 user 0m23.696s 00:11:03.978 sys 0m2.781s 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.978 ************************************ 00:11:03.978 END TEST nvmf_nmic 00:11:03.978 ************************************ 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.978 ************************************ 00:11:03.978 START TEST nvmf_fio_target 00:11:03.978 ************************************ 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.978 * Looking for test storage... 
00:11:03.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.978 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.979 --rc genhtml_branch_coverage=1 00:11:03.979 --rc genhtml_function_coverage=1 00:11:03.979 --rc genhtml_legend=1 00:11:03.979 --rc geninfo_all_blocks=1 00:11:03.979 --rc geninfo_unexecuted_blocks=1 00:11:03.979 00:11:03.979 ' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.979 --rc genhtml_branch_coverage=1 00:11:03.979 --rc genhtml_function_coverage=1 00:11:03.979 --rc genhtml_legend=1 00:11:03.979 --rc geninfo_all_blocks=1 00:11:03.979 --rc geninfo_unexecuted_blocks=1 00:11:03.979 00:11:03.979 ' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.979 --rc genhtml_branch_coverage=1 00:11:03.979 --rc genhtml_function_coverage=1 00:11:03.979 --rc genhtml_legend=1 00:11:03.979 --rc geninfo_all_blocks=1 00:11:03.979 --rc geninfo_unexecuted_blocks=1 00:11:03.979 00:11:03.979 ' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.979 --rc genhtml_branch_coverage=1 00:11:03.979 --rc genhtml_function_coverage=1 00:11:03.979 --rc genhtml_legend=1 00:11:03.979 --rc geninfo_all_blocks=1 00:11:03.979 --rc geninfo_unexecuted_blocks=1 00:11:03.979 00:11:03.979 ' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.979 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.980 00:38:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.980 00:38:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.884 00:38:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:05.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:05.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.884 00:38:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:05.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:05.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.884 00:38:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.884 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:11:05.885 00:11:05.885 --- 10.0.0.2 ping statistics --- 00:11:05.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.885 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:11:05.885 00:11:05.885 --- 10.0.0.1 ping statistics --- 00:11:05.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.885 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=165473 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 165473 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 165473 ']' 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.885 00:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.885 [2024-12-07 00:38:21.955111] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
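The two ping checks just above (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) validate the split topology that nvmf_tcp_init assembled a few lines earlier. Condensed into a standalone sketch, with the interface and namespace names copied from this log and the two e810 ports (cvl_0_0/cvl_0_1) assumed to already exist; the iptables comment used by the harness is omitted here:

# Sketch of the nvmf_tcp_init steps traced above: target port moves into its own netns,
# the initiator port stays on the host, and both sides get one address on 10.0.0.0/24.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                               # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> host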
00:11:05.885 [2024-12-07 00:38:21.955202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.885 [2024-12-07 00:38:22.026888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.143 [2024-12-07 00:38:22.071651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.143 [2024-12-07 00:38:22.071706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.143 [2024-12-07 00:38:22.071741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.143 [2024-12-07 00:38:22.071752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.143 [2024-12-07 00:38:22.071762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.143 [2024-12-07 00:38:22.073445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.143 [2024-12-07 00:38:22.073511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.143 [2024-12-07 00:38:22.073621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.143 [2024-12-07 00:38:22.073629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.143 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:06.402 [2024-12-07 00:38:22.471351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.402 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.660 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:06.660 00:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.227 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:07.227 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.227 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:07.227 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.791 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:07.791 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:07.791 00:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.357 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:08.357 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.357 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:08.357 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.923 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:08.923 00:38:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:08.923 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.187 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.187 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.443 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.443 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:10.007 00:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.007 [2024-12-07 00:38:26.115076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.007 00:38:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:10.573 00:38:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:10.573 00:38:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.506 00:38:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:11.506 00:38:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.506 00:38:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.506 00:38:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:11.506 00:38:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:11.506 00:38:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:13.406 00:38:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:13.406 [global] 00:11:13.406 thread=1 00:11:13.406 invalidate=1 00:11:13.406 rw=write 00:11:13.406 time_based=1 00:11:13.406 runtime=1 00:11:13.406 ioengine=libaio 00:11:13.406 direct=1 00:11:13.406 bs=4096 00:11:13.406 iodepth=1 00:11:13.406 norandommap=0 00:11:13.406 numjobs=1 00:11:13.406 00:11:13.406 verify_dump=1 00:11:13.406 verify_backlog=512 00:11:13.406 verify_state_save=0 00:11:13.406 do_verify=1 00:11:13.406 verify=crc32c-intel 00:11:13.406 [job0] 00:11:13.406 filename=/dev/nvme0n1 00:11:13.406 [job1] 00:11:13.406 filename=/dev/nvme0n2 00:11:13.406 [job2] 00:11:13.406 filename=/dev/nvme0n3 00:11:13.406 [job3] 00:11:13.406 filename=/dev/nvme0n4 00:11:13.406 Could not set queue depth (nvme0n1) 00:11:13.406 Could not set queue depth (nvme0n2) 00:11:13.406 Could not set queue depth (nvme0n3) 00:11:13.406 Could not set queue depth (nvme0n4) 00:11:13.662 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.662 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.662 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.662 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.662 fio-3.35 00:11:13.662 Starting 4 threads 00:11:15.031 00:11:15.031 job0: (groupid=0, jobs=1): err= 0: pid=166509: Sat Dec 7 00:38:30 2024 00:11:15.031 read: IOPS=624, BW=2498KiB/s (2557kB/s)(2520KiB/1009msec) 00:11:15.031 slat (nsec): min=7112, max=56387, avg=13882.14, stdev=6358.49 00:11:15.031 clat (usec): min=194, max=41386, avg=1206.31, stdev=6183.70 00:11:15.031 lat (usec): min=202, max=41404, avg=1220.19, stdev=6184.88 00:11:15.031 clat percentiles (usec): 00:11:15.031 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 
00:11:15.031 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:11:15.031 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 289], 00:11:15.031 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.031 | 99.99th=[41157] 00:11:15.031 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:15.031 slat (usec): min=7, max=856, avg=19.11, stdev=27.30 00:11:15.031 clat (usec): min=151, max=1266, avg=207.83, stdev=57.13 00:11:15.031 lat (usec): min=160, max=1275, avg=226.94, stdev=62.07 00:11:15.031 clat percentiles (usec): 00:11:15.031 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:11:15.031 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 208], 00:11:15.031 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 255], 00:11:15.031 | 99.00th=[ 297], 99.50th=[ 400], 99.90th=[ 947], 99.95th=[ 1270], 00:11:15.031 | 99.99th=[ 1270] 00:11:15.031 bw ( KiB/s): min= 4096, max= 4096, per=29.29%, avg=4096.00, stdev= 0.00, samples=2 00:11:15.031 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:15.031 lat (usec) : 250=83.74%, 500=15.11%, 1000=0.18% 00:11:15.031 lat (msec) : 2=0.06%, 50=0.91% 00:11:15.031 cpu : usr=1.69%, sys=3.77%, ctx=1657, majf=0, minf=1 00:11:15.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.031 issued rwts: total=630,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.031 job1: (groupid=0, jobs=1): err= 0: pid=166541: Sat Dec 7 00:38:30 2024 00:11:15.031 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:11:15.031 slat (nsec): min=15004, max=36135, avg=27062.55, stdev=8229.88 00:11:15.031 clat (usec): min=40902, max=42081, avg=41637.29, stdev=479.33 00:11:15.031 lat (usec): min=40936, max=42099, avg=41664.35, stdev=476.14 00:11:15.031 clat percentiles (usec): 00:11:15.031 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.031 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:15.031 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:15.031 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.031 | 99.99th=[42206] 00:11:15.032 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:15.032 slat (nsec): min=7346, max=65469, avg=11031.58, stdev=6210.93 00:11:15.032 clat (usec): min=145, max=319, avg=195.85, stdev=31.01 00:11:15.032 lat (usec): min=154, max=327, avg=206.88, stdev=32.71 00:11:15.032 clat percentiles (usec): 00:11:15.032 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:11:15.032 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:11:15.032 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 262], 00:11:15.032 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 322], 00:11:15.032 | 99.99th=[ 322] 00:11:15.032 bw ( KiB/s): min= 4096, max= 4096, per=29.29%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.032 lat (usec) : 250=89.70%, 500=6.18% 00:11:15.032 lat (msec) : 50=4.12% 00:11:15.032 cpu : usr=0.29%, sys=0.88%, ctx=534, majf=0, minf=2 00:11:15.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:15.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.032 job2: (groupid=0, jobs=1): err= 0: pid=166565: Sat Dec 7 00:38:30 2024 00:11:15.032 read: IOPS=1026, BW=4107KiB/s (4206kB/s)(4128KiB/1005msec) 00:11:15.032 slat (nsec): min=5108, max=48880, avg=14241.80, stdev=6557.79 00:11:15.032 clat (usec): min=199, max=41256, avg=651.46, stdev=3788.25 00:11:15.032 lat (usec): min=207, max=41272, avg=665.70, stdev=3788.21 00:11:15.032 clat percentiles (usec): 00:11:15.032 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:11:15.032 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 269], 00:11:15.032 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 469], 95.00th=[ 519], 00:11:15.032 | 99.00th=[ 1926], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.032 | 99.99th=[41157] 00:11:15.032 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:11:15.032 slat (nsec): min=6703, max=78899, avg=10923.55, stdev=4949.92 00:11:15.032 clat (usec): min=136, max=1343, avg=190.55, stdev=46.16 00:11:15.032 lat (usec): min=146, max=1363, avg=201.48, stdev=47.90 00:11:15.032 clat percentiles (usec): 00:11:15.032 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:11:15.032 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:11:15.032 | 70.00th=[ 196], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 245], 00:11:15.032 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 717], 99.95th=[ 1352], 00:11:15.032 | 99.99th=[ 1352] 00:11:15.032 bw ( KiB/s): min= 4096, max= 8192, per=43.93%, avg=6144.00, stdev=2896.31, samples=2 00:11:15.032 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:15.032 lat (usec) : 250=73.56%, 500=23.60%, 750=2.38% 00:11:15.032 lat (msec) : 2=0.12%, 50=0.35% 00:11:15.032 cpu : usr=1.59%, sys=4.18%, ctx=2569, majf=0, minf=1 00:11:15.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.032 job3: (groupid=0, jobs=1): err= 0: pid=166566: Sat Dec 7 00:38:30 2024 00:11:15.032 read: IOPS=217, BW=869KiB/s (890kB/s)(880KiB/1013msec) 00:11:15.032 slat (nsec): min=10324, max=53407, avg=17587.82, stdev=7896.30 00:11:15.032 clat (usec): min=210, max=41022, avg=4025.34, stdev=11724.13 00:11:15.032 lat (usec): min=223, max=41034, avg=4042.93, stdev=11722.94 00:11:15.032 clat percentiles (usec): 00:11:15.032 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:11:15.032 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 255], 00:11:15.032 | 70.00th=[ 273], 80.00th=[ 453], 90.00th=[ 930], 95.00th=[41157], 00:11:15.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:15.032 | 99.99th=[41157] 00:11:15.032 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:15.032 slat (nsec): min=6446, max=69472, avg=11265.57, stdev=6343.02 00:11:15.032 clat (usec): min=154, max=384, avg=223.13, stdev=27.25 00:11:15.032 lat (usec): min=173, 
max=392, avg=234.40, stdev=25.80 00:11:15.032 clat percentiles (usec): 00:11:15.032 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 198], 00:11:15.032 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:11:15.032 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 255], 00:11:15.032 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 383], 99.95th=[ 383], 00:11:15.032 | 99.99th=[ 383] 00:11:15.032 bw ( KiB/s): min= 4096, max= 4096, per=29.29%, avg=4096.00, stdev= 0.00, samples=1 00:11:15.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:15.032 lat (usec) : 250=79.78%, 500=16.53%, 750=0.55%, 1000=0.14% 00:11:15.032 lat (msec) : 4=0.14%, 10=0.14%, 50=2.73% 00:11:15.032 cpu : usr=0.59%, sys=0.79%, ctx=732, majf=0, minf=1 00:11:15.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.032 issued rwts: total=220,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.032 00:11:15.032 Run status group 0 (all jobs): 00:11:15.032 READ: bw=7430KiB/s (7609kB/s), 85.9KiB/s-4107KiB/s (87.9kB/s-4206kB/s), io=7616KiB (7799kB), run=1005-1025msec 00:11:15.032 WRITE: bw=13.7MiB/s (14.3MB/s), 1998KiB/s-6113KiB/s (2046kB/s-6260kB/s), io=14.0MiB (14.7MB), run=1005-1025msec 00:11:15.032 00:11:15.032 Disk stats (read/write): 00:11:15.032 nvme0n1: ios=589/1024, merge=0/0, ticks=788/200, in_queue=988, util=97.29% 00:11:15.032 nvme0n2: ios=16/512, merge=0/0, ticks=665/95, in_queue=760, util=82.97% 00:11:15.032 nvme0n3: ios=1085/1536, merge=0/0, ticks=715/270, in_queue=985, util=97.49% 00:11:15.032 nvme0n4: ios=214/512, merge=0/0, ticks=635/112, in_queue=747, util=89.15% 00:11:15.032 00:38:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:15.032 [global] 00:11:15.032 thread=1 00:11:15.032 invalidate=1 00:11:15.032 rw=randwrite 00:11:15.032 time_based=1 00:11:15.032 runtime=1 00:11:15.032 ioengine=libaio 00:11:15.032 direct=1 00:11:15.032 bs=4096 00:11:15.032 iodepth=1 00:11:15.032 norandommap=0 00:11:15.032 numjobs=1 00:11:15.032 00:11:15.032 verify_dump=1 00:11:15.032 verify_backlog=512 00:11:15.032 verify_state_save=0 00:11:15.032 do_verify=1 00:11:15.032 verify=crc32c-intel 00:11:15.032 [job0] 00:11:15.032 filename=/dev/nvme0n1 00:11:15.032 [job1] 00:11:15.032 filename=/dev/nvme0n2 00:11:15.032 [job2] 00:11:15.032 filename=/dev/nvme0n3 00:11:15.032 [job3] 00:11:15.032 filename=/dev/nvme0n4 00:11:15.032 Could not set queue depth (nvme0n1) 00:11:15.032 Could not set queue depth (nvme0n2) 00:11:15.032 Could not set queue depth (nvme0n3) 00:11:15.032 Could not set queue depth (nvme0n4) 00:11:15.032 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.032 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.032 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.032 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.032 fio-3.35 00:11:15.032 Starting 4 threads 00:11:16.405 00:11:16.405 job0: (groupid=0, jobs=1): 
err= 0: pid=166790: Sat Dec 7 00:38:32 2024 00:11:16.405 read: IOPS=152, BW=610KiB/s (625kB/s)(632KiB/1036msec) 00:11:16.405 slat (nsec): min=8151, max=45428, avg=18893.38, stdev=6311.91 00:11:16.405 clat (usec): min=204, max=41308, avg=5666.74, stdev=13840.62 00:11:16.405 lat (usec): min=214, max=41329, avg=5685.63, stdev=13839.49 00:11:16.405 clat percentiles (usec): 00:11:16.405 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 245], 00:11:16.405 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:16.405 | 70.00th=[ 269], 80.00th=[ 322], 90.00th=[40633], 95.00th=[41157], 00:11:16.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.405 | 99.99th=[41157] 00:11:16.405 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:16.405 slat (nsec): min=10945, max=52004, avg=22411.53, stdev=4125.98 00:11:16.405 clat (usec): min=175, max=422, avg=237.12, stdev=33.57 00:11:16.405 lat (usec): min=196, max=447, avg=259.53, stdev=33.81 00:11:16.405 clat percentiles (usec): 00:11:16.405 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:11:16.405 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:11:16.405 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 302], 00:11:16.405 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 424], 99.95th=[ 424], 00:11:16.405 | 99.99th=[ 424] 00:11:16.405 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.405 lat (usec) : 250=67.61%, 500=29.25% 00:11:16.405 lat (msec) : 50=3.13% 00:11:16.405 cpu : usr=0.97%, sys=1.93%, ctx=670, majf=0, minf=1 00:11:16.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.405 issued rwts: total=158,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.405 job1: (groupid=0, jobs=1): err= 0: pid=166791: Sat Dec 7 00:38:32 2024 00:11:16.405 read: IOPS=699, BW=2798KiB/s (2866kB/s)(2888KiB/1032msec) 00:11:16.405 slat (nsec): min=5275, max=78585, avg=11556.90, stdev=8658.51 00:11:16.405 clat (usec): min=162, max=41992, avg=1139.47, stdev=6097.33 00:11:16.405 lat (usec): min=167, max=42014, avg=1151.02, stdev=6099.57 00:11:16.405 clat percentiles (usec): 00:11:16.405 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:11:16.405 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:11:16.405 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 334], 95.00th=[ 400], 00:11:16.405 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.405 | 99.99th=[42206] 00:11:16.405 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:11:16.405 slat (nsec): min=6819, max=46771, avg=14081.18, stdev=4953.49 00:11:16.405 clat (usec): min=121, max=311, avg=175.39, stdev=26.74 00:11:16.405 lat (usec): min=129, max=341, avg=189.47, stdev=28.76 00:11:16.405 clat percentiles (usec): 00:11:16.405 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 147], 00:11:16.405 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 186], 00:11:16.405 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 212], 00:11:16.405 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 306], 99.95th=[ 314], 00:11:16.405 
| 99.99th=[ 314] 00:11:16.405 bw ( KiB/s): min= 8192, max= 8192, per=82.88%, avg=8192.00, stdev= 0.00, samples=1 00:11:16.405 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:16.405 lat (usec) : 250=92.50%, 500=6.53% 00:11:16.405 lat (msec) : 2=0.06%, 50=0.92% 00:11:16.405 cpu : usr=0.78%, sys=2.72%, ctx=1746, majf=0, minf=1 00:11:16.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.405 issued rwts: total=722,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.406 job2: (groupid=0, jobs=1): err= 0: pid=166793: Sat Dec 7 00:38:32 2024 00:11:16.406 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:11:16.406 slat (nsec): min=14969, max=33098, avg=23848.78, stdev=8064.96 00:11:16.406 clat (usec): min=269, max=41213, avg=38782.96, stdev=8625.34 00:11:16.406 lat (usec): min=286, max=41238, avg=38806.80, stdev=8627.39 00:11:16.406 clat percentiles (usec): 00:11:16.406 | 1.00th=[ 269], 5.00th=[31589], 10.00th=[40633], 20.00th=[40633], 00:11:16.406 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:16.406 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:16.406 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.406 | 99.99th=[41157] 00:11:16.406 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:16.406 slat (nsec): min=8079, max=40405, avg=15984.11, stdev=3129.69 00:11:16.406 clat (usec): min=163, max=274, avg=197.72, stdev=16.61 00:11:16.406 lat (usec): min=175, max=314, avg=213.70, stdev=17.17 00:11:16.406 clat percentiles (usec): 00:11:16.406 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:11:16.406 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:11:16.406 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:11:16.406 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 277], 00:11:16.406 | 99.99th=[ 277] 00:11:16.406 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.406 lat (usec) : 250=95.33%, 500=0.56% 00:11:16.406 lat (msec) : 50=4.11% 00:11:16.406 cpu : usr=0.50%, sys=0.80%, ctx=535, majf=0, minf=1 00:11:16.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.406 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.406 job3: (groupid=0, jobs=1): err= 0: pid=166794: Sat Dec 7 00:38:32 2024 00:11:16.406 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:11:16.406 slat (nsec): min=14535, max=31961, avg=22132.14, stdev=7896.26 00:11:16.406 clat (usec): min=40940, max=42088, avg=41530.96, stdev=483.90 00:11:16.406 lat (usec): min=40957, max=42120, avg=41553.09, stdev=487.44 00:11:16.406 clat percentiles (usec): 00:11:16.406 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:16.406 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:16.406 | 70.00th=[41681], 
80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:16.406 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.406 | 99.99th=[42206] 00:11:16.406 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:16.406 slat (nsec): min=8072, max=50468, avg=16236.20, stdev=3642.43 00:11:16.406 clat (usec): min=156, max=1175, avg=231.61, stdev=54.16 00:11:16.406 lat (usec): min=172, max=1191, avg=247.85, stdev=54.62 00:11:16.406 clat percentiles (usec): 00:11:16.406 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 196], 00:11:16.406 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:11:16.406 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:11:16.406 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:16.406 | 99.99th=[ 1172] 00:11:16.406 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:11:16.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:16.406 lat (usec) : 250=72.23%, 500=23.45%, 750=0.19% 00:11:16.406 lat (msec) : 2=0.19%, 50=3.94% 00:11:16.406 cpu : usr=0.40%, sys=0.80%, ctx=533, majf=0, minf=1 00:11:16.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.406 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.406 00:11:16.406 Run status group 0 (all jobs): 00:11:16.406 READ: bw=3568KiB/s (3653kB/s), 83.8KiB/s-2798KiB/s (85.8kB/s-2866kB/s), io=3696KiB (3785kB), run=1002-1036msec 00:11:16.406 WRITE: bw=9884KiB/s (10.1MB/s), 1977KiB/s-3969KiB/s (2024kB/s-4064kB/s), io=10.0MiB (10.5MB), run=1002-1036msec 00:11:16.406 00:11:16.406 Disk stats (read/write): 00:11:16.406 nvme0n1: ios=202/512, merge=0/0, ticks=725/115, in_queue=840, util=86.97% 00:11:16.406 nvme0n2: ios=767/1024, merge=0/0, ticks=693/176, in_queue=869, util=91.07% 00:11:16.406 nvme0n3: ios=64/512, merge=0/0, ticks=827/93, in_queue=920, util=94.68% 00:11:16.406 nvme0n4: ios=74/512, merge=0/0, ticks=952/115, in_queue=1067, util=99.26% 00:11:16.406 00:38:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:16.406 [global] 00:11:16.406 thread=1 00:11:16.406 invalidate=1 00:11:16.406 rw=write 00:11:16.406 time_based=1 00:11:16.406 runtime=1 00:11:16.406 ioengine=libaio 00:11:16.406 direct=1 00:11:16.406 bs=4096 00:11:16.406 iodepth=128 00:11:16.406 norandommap=0 00:11:16.406 numjobs=1 00:11:16.406 00:11:16.406 verify_dump=1 00:11:16.406 verify_backlog=512 00:11:16.406 verify_state_save=0 00:11:16.406 do_verify=1 00:11:16.406 verify=crc32c-intel 00:11:16.406 [job0] 00:11:16.406 filename=/dev/nvme0n1 00:11:16.406 [job1] 00:11:16.406 filename=/dev/nvme0n2 00:11:16.406 [job2] 00:11:16.406 filename=/dev/nvme0n3 00:11:16.406 [job3] 00:11:16.406 filename=/dev/nvme0n4 00:11:16.406 Could not set queue depth (nvme0n1) 00:11:16.406 Could not set queue depth (nvme0n2) 00:11:16.406 Could not set queue depth (nvme0n3) 00:11:16.406 Could not set queue depth (nvme0n4) 00:11:16.406 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.406 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.406 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.406 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.406 fio-3.35 00:11:16.406 Starting 4 threads 00:11:17.782 00:11:17.782 job0: (groupid=0, jobs=1): err= 0: pid=167026: Sat Dec 7 00:38:33 2024 00:11:17.782 read: IOPS=4011, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1006msec) 00:11:17.782 slat (usec): min=2, max=18848, avg=132.17, stdev=936.69 00:11:17.782 clat (usec): min=3032, max=69109, avg=16088.26, stdev=9417.12 00:11:17.782 lat (usec): min=4438, max=69128, avg=16220.43, stdev=9483.94 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 6652], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11600], 00:11:17.782 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12780], 60.00th=[13566], 00:11:17.782 | 70.00th=[15139], 80.00th=[19006], 90.00th=[21890], 95.00th=[37487], 00:11:17.782 | 99.00th=[63177], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:11:17.782 | 99.99th=[68682] 00:11:17.782 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:11:17.782 slat (usec): min=2, max=15475, avg=105.05, stdev=579.25 00:11:17.782 clat (usec): min=2739, max=69092, avg=15256.86, stdev=6472.13 00:11:17.782 lat (usec): min=2754, max=69111, avg=15361.91, stdev=6516.14 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 2966], 5.00th=[ 5997], 10.00th=[ 8586], 20.00th=[11338], 00:11:17.782 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[15008], 00:11:17.782 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23725], 95.00th=[25560], 00:11:17.782 | 99.00th=[35390], 99.50th=[44827], 99.90th=[54264], 99.95th=[54264], 00:11:17.782 | 99.99th=[68682] 00:11:17.782 bw ( KiB/s): min=16384, max=16384, per=27.15%, avg=16384.00, stdev= 0.00, samples=2 00:11:17.782 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:17.782 lat (msec) : 4=0.87%, 10=9.06%, 20=72.21%, 50=16.60%, 100=1.25% 00:11:17.782 cpu : usr=4.18%, sys=7.06%, ctx=430, majf=0, minf=1 00:11:17.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:17.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.782 issued rwts: total=4036,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.782 job1: (groupid=0, jobs=1): err= 0: pid=167027: Sat Dec 7 00:38:33 2024 00:11:17.782 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:17.782 slat (usec): min=2, max=14948, avg=128.56, stdev=824.43 00:11:17.782 clat (usec): min=5592, max=39511, avg=16426.93, stdev=5751.66 00:11:17.782 lat (usec): min=5605, max=39540, avg=16555.49, stdev=5808.88 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[11076], 00:11:17.782 | 30.00th=[11731], 40.00th=[12911], 50.00th=[15008], 60.00th=[18220], 00:11:17.782 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24249], 95.00th=[27395], 00:11:17.782 | 99.00th=[29230], 99.50th=[30278], 99.90th=[32113], 99.95th=[38536], 00:11:17.782 | 99.99th=[39584] 00:11:17.782 write: IOPS=3972, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec); 0 zone resets 00:11:17.782 slat (usec): min=3, max=9593, avg=124.03, stdev=660.07 00:11:17.782 clat (usec): min=1230, max=53094, 
avg=17189.48, stdev=8187.64 00:11:17.782 lat (usec): min=1243, max=53112, avg=17313.50, stdev=8229.14 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 4686], 5.00th=[ 6849], 10.00th=[10028], 20.00th=[10945], 00:11:17.782 | 30.00th=[11338], 40.00th=[12518], 50.00th=[16319], 60.00th=[17433], 00:11:17.782 | 70.00th=[19792], 80.00th=[23200], 90.00th=[26084], 95.00th=[31065], 00:11:17.782 | 99.00th=[47973], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:11:17.782 | 99.99th=[53216] 00:11:17.782 bw ( KiB/s): min=14264, max=16624, per=25.59%, avg=15444.00, stdev=1668.77, samples=2 00:11:17.782 iops : min= 3566, max= 4156, avg=3861.00, stdev=417.19, samples=2 00:11:17.782 lat (msec) : 2=0.03%, 4=0.42%, 10=9.40%, 20=63.13%, 50=26.61% 00:11:17.782 lat (msec) : 100=0.41% 00:11:17.782 cpu : usr=4.89%, sys=7.58%, ctx=381, majf=0, minf=1 00:11:17.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:17.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.782 issued rwts: total=3584,3988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.782 job2: (groupid=0, jobs=1): err= 0: pid=167030: Sat Dec 7 00:38:33 2024 00:11:17.782 read: IOPS=3186, BW=12.4MiB/s (13.1MB/s)(12.5MiB/1004msec) 00:11:17.782 slat (usec): min=2, max=13536, avg=142.85, stdev=912.58 00:11:17.782 clat (usec): min=656, max=46731, avg=18274.81, stdev=8859.84 00:11:17.782 lat (usec): min=3959, max=50179, avg=18417.67, stdev=8947.16 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 4555], 5.00th=[10290], 10.00th=[11863], 20.00th=[12649], 00:11:17.782 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13435], 60.00th=[14353], 00:11:17.782 | 70.00th=[20579], 80.00th=[28705], 90.00th=[32113], 95.00th=[35914], 00:11:17.782 | 99.00th=[40633], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:11:17.782 | 99.99th=[46924] 00:11:17.782 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:17.782 slat (usec): min=3, max=24922, avg=143.65, stdev=957.91 00:11:17.782 clat (usec): min=6378, max=66800, avg=18795.66, stdev=10708.66 00:11:17.782 lat (usec): min=6415, max=66847, avg=18939.32, stdev=10804.00 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[12125], 20.00th=[12649], 00:11:17.782 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[14353], 00:11:17.782 | 70.00th=[16909], 80.00th=[23725], 90.00th=[39584], 95.00th=[43254], 00:11:17.782 | 99.00th=[54264], 99.50th=[54264], 99.90th=[56361], 99.95th=[57934], 00:11:17.782 | 99.99th=[66847] 00:11:17.782 bw ( KiB/s): min= 8192, max=20472, per=23.75%, avg=14332.00, stdev=8683.27, samples=2 00:11:17.782 iops : min= 2048, max= 5118, avg=3583.00, stdev=2170.82, samples=2 00:11:17.782 lat (usec) : 750=0.01% 00:11:17.782 lat (msec) : 4=0.03%, 10=3.33%, 20=68.05%, 50=27.64%, 100=0.93% 00:11:17.782 cpu : usr=3.59%, sys=6.68%, ctx=362, majf=0, minf=1 00:11:17.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:17.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.782 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.782 job3: (groupid=0, jobs=1): err= 0: 
pid=167031: Sat Dec 7 00:38:33 2024 00:11:17.782 read: IOPS=3706, BW=14.5MiB/s (15.2MB/s)(15.1MiB/1045msec) 00:11:17.782 slat (usec): min=2, max=11604, avg=131.74, stdev=820.31 00:11:17.782 clat (usec): min=2552, max=71519, avg=18235.86, stdev=9901.15 00:11:17.782 lat (usec): min=3487, max=71528, avg=18367.60, stdev=9951.39 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 6587], 5.00th=[10945], 10.00th=[12125], 20.00th=[12649], 00:11:17.782 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14091], 60.00th=[16712], 00:11:17.782 | 70.00th=[18220], 80.00th=[20841], 90.00th=[30540], 95.00th=[33817], 00:11:17.782 | 99.00th=[63177], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:11:17.782 | 99.99th=[71828] 00:11:17.782 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:11:17.782 slat (usec): min=3, max=11232, avg=109.29, stdev=707.07 00:11:17.782 clat (usec): min=1703, max=35872, avg=15138.28, stdev=5225.21 00:11:17.782 lat (usec): min=1708, max=35881, avg=15247.56, stdev=5287.06 00:11:17.782 clat percentiles (usec): 00:11:17.782 | 1.00th=[ 4293], 5.00th=[ 8225], 10.00th=[ 9765], 20.00th=[11863], 00:11:17.782 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13566], 60.00th=[13960], 00:11:17.782 | 70.00th=[16909], 80.00th=[19530], 90.00th=[23987], 95.00th=[25822], 00:11:17.782 | 99.00th=[28443], 99.50th=[28967], 99.90th=[32113], 99.95th=[32113], 00:11:17.782 | 99.99th=[35914] 00:11:17.782 bw ( KiB/s): min=14720, max=18048, per=27.15%, avg=16384.00, stdev=2353.25, samples=2 00:11:17.782 iops : min= 3680, max= 4512, avg=4096.00, stdev=588.31, samples=2 00:11:17.783 lat (msec) : 2=0.09%, 4=0.29%, 10=6.20%, 20=73.15%, 50=19.10% 00:11:17.783 lat (msec) : 100=1.18% 00:11:17.783 cpu : usr=3.93%, sys=5.27%, ctx=326, majf=0, minf=1 00:11:17.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:17.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.783 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.783 00:11:17.783 Run status group 0 (all jobs): 00:11:17.783 READ: bw=54.9MiB/s (57.6MB/s), 12.4MiB/s-15.7MiB/s (13.1MB/s-16.4MB/s), io=57.4MiB (60.2MB), run=1004-1045msec 00:11:17.783 WRITE: bw=58.9MiB/s (61.8MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=61.6MiB (64.6MB), run=1004-1045msec 00:11:17.783 00:11:17.783 Disk stats (read/write): 00:11:17.783 nvme0n1: ios=3122/3439, merge=0/0, ticks=46344/48660, in_queue=95004, util=86.67% 00:11:17.783 nvme0n2: ios=3116/3571, merge=0/0, ticks=40487/53475, in_queue=93962, util=88.53% 00:11:17.783 nvme0n3: ios=2608/2711, merge=0/0, ticks=16342/19709, in_queue=36051, util=99.90% 00:11:17.783 nvme0n4: ios=3628/3639, merge=0/0, ticks=31142/31055, in_queue=62197, util=90.44% 00:11:17.783 00:38:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:17.783 [global] 00:11:17.783 thread=1 00:11:17.783 invalidate=1 00:11:17.783 rw=randwrite 00:11:17.783 time_based=1 00:11:17.783 runtime=1 00:11:17.783 ioengine=libaio 00:11:17.783 direct=1 00:11:17.783 bs=4096 00:11:17.783 iodepth=128 00:11:17.783 norandommap=0 00:11:17.783 numjobs=1 00:11:17.783 00:11:17.783 verify_dump=1 00:11:17.783 verify_backlog=512 00:11:17.783 verify_state_save=0 00:11:17.783 do_verify=1 
00:11:17.783 verify=crc32c-intel 00:11:17.783 [job0] 00:11:17.783 filename=/dev/nvme0n1 00:11:17.783 [job1] 00:11:17.783 filename=/dev/nvme0n2 00:11:17.783 [job2] 00:11:17.783 filename=/dev/nvme0n3 00:11:17.783 [job3] 00:11:17.783 filename=/dev/nvme0n4 00:11:17.783 Could not set queue depth (nvme0n1) 00:11:17.783 Could not set queue depth (nvme0n2) 00:11:17.783 Could not set queue depth (nvme0n3) 00:11:17.783 Could not set queue depth (nvme0n4) 00:11:18.041 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.041 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.041 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.041 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.041 fio-3.35 00:11:18.041 Starting 4 threads 00:11:19.415 00:11:19.415 job0: (groupid=0, jobs=1): err= 0: pid=167258: Sat Dec 7 00:38:35 2024 00:11:19.415 read: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1005msec) 00:11:19.415 slat (usec): min=2, max=47066, avg=122.77, stdev=985.18 00:11:19.415 clat (usec): min=1415, max=64160, avg=15841.89, stdev=9851.59 00:11:19.415 lat (usec): min=7392, max=64170, avg=15964.66, stdev=9904.68 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10683], 00:11:19.415 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12387], 60.00th=[12911], 00:11:19.415 | 70.00th=[15533], 80.00th=[21103], 90.00th=[23987], 95.00th=[30278], 00:11:19.415 | 99.00th=[61604], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:11:19.415 | 99.99th=[64226] 00:11:19.415 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:19.415 slat (usec): min=2, max=16813, avg=115.83, stdev=682.61 00:11:19.415 clat (usec): min=6454, max=66135, avg=15449.35, stdev=8920.61 00:11:19.415 lat (usec): min=6481, max=66157, avg=15565.17, stdev=8990.07 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:11:19.415 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:11:19.415 | 70.00th=[13960], 80.00th=[17695], 90.00th=[23462], 95.00th=[30016], 00:11:19.415 | 99.00th=[57410], 99.50th=[60556], 99.90th=[66323], 99.95th=[66323], 00:11:19.415 | 99.99th=[66323] 00:11:19.415 bw ( KiB/s): min=16384, max=16384, per=24.92%, avg=16384.00, stdev= 0.00, samples=2 00:11:19.415 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:19.415 lat (msec) : 2=0.01%, 10=6.73%, 20=73.54%, 50=17.09%, 100=2.63% 00:11:19.415 cpu : usr=3.78%, sys=6.57%, ctx=440, majf=0, minf=1 00:11:19.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:19.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.415 issued rwts: total=4028,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.415 job1: (groupid=0, jobs=1): err= 0: pid=167259: Sat Dec 7 00:38:35 2024 00:11:19.415 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:11:19.415 slat (usec): min=3, max=12146, avg=91.70, stdev=550.83 00:11:19.415 clat (usec): min=4832, max=29571, avg=12225.76, stdev=3072.53 00:11:19.415 lat (usec): min=4850, max=32935, 
avg=12317.47, stdev=3113.80 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:11:19.415 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:11:19.415 | 70.00th=[12125], 80.00th=[13566], 90.00th=[15664], 95.00th=[19530], 00:11:19.415 | 99.00th=[24249], 99.50th=[26346], 99.90th=[27657], 99.95th=[29492], 00:11:19.415 | 99.99th=[29492] 00:11:19.415 write: IOPS=5270, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1009msec); 0 zone resets 00:11:19.415 slat (usec): min=3, max=3601, avg=89.45, stdev=388.84 00:11:19.415 clat (usec): min=1127, max=29471, avg=12195.16, stdev=3669.86 00:11:19.415 lat (usec): min=1138, max=29480, avg=12284.62, stdev=3697.47 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10552], 00:11:19.415 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:19.415 | 70.00th=[11469], 80.00th=[13698], 90.00th=[16712], 95.00th=[21627], 00:11:19.415 | 99.00th=[25297], 99.50th=[27657], 99.90th=[29492], 99.95th=[29492], 00:11:19.415 | 99.99th=[29492] 00:11:19.415 bw ( KiB/s): min=16952, max=24576, per=31.59%, avg=20764.00, stdev=5390.98, samples=2 00:11:19.415 iops : min= 4238, max= 6144, avg=5191.00, stdev=1347.75, samples=2 00:11:19.415 lat (msec) : 2=0.03%, 4=0.13%, 10=11.28%, 20=82.32%, 50=6.24% 00:11:19.415 cpu : usr=6.35%, sys=10.02%, ctx=540, majf=0, minf=1 00:11:19.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:19.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.415 issued rwts: total=5120,5318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.415 job2: (groupid=0, jobs=1): err= 0: pid=167260: Sat Dec 7 00:38:35 2024 00:11:19.415 read: IOPS=3898, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1002msec) 00:11:19.415 slat (usec): min=2, max=8091, avg=115.77, stdev=654.73 00:11:19.415 clat (usec): min=816, max=26707, avg=14170.52, stdev=2740.02 00:11:19.415 lat (usec): min=3525, max=26720, avg=14286.29, stdev=2796.09 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 5473], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[12125], 00:11:19.415 | 30.00th=[12518], 40.00th=[13698], 50.00th=[14484], 60.00th=[14877], 00:11:19.415 | 70.00th=[15270], 80.00th=[16188], 90.00th=[16712], 95.00th=[18220], 00:11:19.415 | 99.00th=[22152], 99.50th=[23987], 99.90th=[26608], 99.95th=[26608], 00:11:19.415 | 99.99th=[26608] 00:11:19.415 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:11:19.415 slat (usec): min=3, max=8432, avg=126.70, stdev=547.62 00:11:19.415 clat (usec): min=6558, max=31432, avg=17423.31, stdev=5420.60 00:11:19.415 lat (usec): min=6613, max=31439, avg=17550.01, stdev=5461.25 00:11:19.415 clat percentiles (usec): 00:11:19.415 | 1.00th=[ 9241], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:11:19.415 | 30.00th=[13042], 40.00th=[14877], 50.00th=[15926], 60.00th=[17433], 00:11:19.415 | 70.00th=[20841], 80.00th=[22414], 90.00th=[25035], 95.00th=[28443], 00:11:19.416 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:11:19.416 | 99.99th=[31327] 00:11:19.416 bw ( KiB/s): min=16200, max=16568, per=24.92%, avg=16384.00, stdev=260.22, samples=2 00:11:19.416 iops : min= 4050, max= 4142, avg=4096.00, stdev=65.05, samples=2 00:11:19.416 lat (usec) : 1000=0.01% 
00:11:19.416 lat (msec) : 4=0.40%, 10=3.14%, 20=79.16%, 50=17.30% 00:11:19.416 cpu : usr=3.00%, sys=5.99%, ctx=490, majf=0, minf=2 00:11:19.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:19.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.416 issued rwts: total=3906,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.416 job3: (groupid=0, jobs=1): err= 0: pid=167261: Sat Dec 7 00:38:35 2024 00:11:19.416 read: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1004msec) 00:11:19.416 slat (usec): min=2, max=51474, avg=194.76, stdev=1451.54 00:11:19.416 clat (usec): min=705, max=86422, avg=24468.66, stdev=15611.98 00:11:19.416 lat (usec): min=4630, max=86426, avg=24663.42, stdev=15674.85 00:11:19.416 clat percentiles (usec): 00:11:19.416 | 1.00th=[ 8979], 5.00th=[12256], 10.00th=[13435], 20.00th=[13960], 00:11:19.416 | 30.00th=[14353], 40.00th=[17695], 50.00th=[22152], 60.00th=[24249], 00:11:19.416 | 70.00th=[25035], 80.00th=[27919], 90.00th=[39584], 95.00th=[53740], 00:11:19.416 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:11:19.416 | 99.99th=[86508] 00:11:19.416 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:11:19.416 slat (usec): min=3, max=11366, avg=155.03, stdev=835.97 00:11:19.416 clat (usec): min=7539, max=48629, avg=20481.42, stdev=8074.53 00:11:19.416 lat (usec): min=7549, max=48659, avg=20636.45, stdev=8138.37 00:11:19.416 clat percentiles (usec): 00:11:19.416 | 1.00th=[10552], 5.00th=[11207], 10.00th=[13698], 20.00th=[13829], 00:11:19.416 | 30.00th=[14484], 40.00th=[15926], 50.00th=[17433], 60.00th=[22152], 00:11:19.416 | 70.00th=[23462], 80.00th=[25297], 90.00th=[31589], 95.00th=[38536], 00:11:19.416 | 99.00th=[44827], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:11:19.416 | 99.99th=[48497] 00:11:19.416 bw ( KiB/s): min=10944, max=12872, per=18.11%, avg=11908.00, stdev=1363.30, samples=2 00:11:19.416 iops : min= 2736, max= 3218, avg=2977.00, stdev=340.83, samples=2 00:11:19.416 lat (usec) : 750=0.02% 00:11:19.416 lat (msec) : 10=1.06%, 20=49.27%, 50=46.81%, 100=2.84% 00:11:19.416 cpu : usr=2.59%, sys=4.19%, ctx=262, majf=0, minf=1 00:11:19.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:19.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.416 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.416 00:11:19.416 Run status group 0 (all jobs): 00:11:19.416 READ: bw=60.6MiB/s (63.5MB/s), 10.1MiB/s-19.8MiB/s (10.6MB/s-20.8MB/s), io=61.1MiB (64.1MB), run=1002-1009msec 00:11:19.416 WRITE: bw=64.2MiB/s (67.3MB/s), 12.0MiB/s-20.6MiB/s (12.5MB/s-21.6MB/s), io=64.8MiB (67.9MB), run=1002-1009msec 00:11:19.416 00:11:19.416 Disk stats (read/write): 00:11:19.416 nvme0n1: ios=3116/3575, merge=0/0, ticks=21155/23218, in_queue=44373, util=86.77% 00:11:19.416 nvme0n2: ios=4264/4608, merge=0/0, ticks=17168/18331, in_queue=35499, util=97.97% 00:11:19.416 nvme0n3: ios=3146/3584, merge=0/0, ticks=22178/28918, in_queue=51096, util=89.04% 00:11:19.416 nvme0n4: ios=2048/2295, merge=0/0, ticks=18576/17422, in_queue=35998, util=89.06% 00:11:19.416 00:38:35 
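Each fio-wrapper call in this section echoes the job file it generates, so the mapping from wrapper flags to fio options is visible in the trace: -i sets bs, -d sets iodepth, -t sets rw, -r sets runtime, and -v turns on crc32c verification. A hand-written job file roughly equivalent to the "-i 4096 -d 128 -t randwrite -r 1 -v" run that just finished would look like the sketch below, with the device names as enumerated by this connect.

cat > /tmp/nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF

fio /tmp/nvmf-verify.fio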
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:19.416 00:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=167405 00:11:19.416 00:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:19.416 00:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:19.416 [global] 00:11:19.416 thread=1 00:11:19.416 invalidate=1 00:11:19.416 rw=read 00:11:19.416 time_based=1 00:11:19.416 runtime=10 00:11:19.416 ioengine=libaio 00:11:19.416 direct=1 00:11:19.416 bs=4096 00:11:19.416 iodepth=1 00:11:19.416 norandommap=1 00:11:19.416 numjobs=1 00:11:19.416 00:11:19.416 [job0] 00:11:19.416 filename=/dev/nvme0n1 00:11:19.416 [job1] 00:11:19.416 filename=/dev/nvme0n2 00:11:19.416 [job2] 00:11:19.416 filename=/dev/nvme0n3 00:11:19.416 [job3] 00:11:19.416 filename=/dev/nvme0n4 00:11:19.416 Could not set queue depth (nvme0n1) 00:11:19.416 Could not set queue depth (nvme0n2) 00:11:19.416 Could not set queue depth (nvme0n3) 00:11:19.416 Could not set queue depth (nvme0n4) 00:11:19.416 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.416 fio-3.35 00:11:19.416 Starting 4 threads 00:11:22.697 00:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:22.697 00:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:22.697 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=442368, buflen=4096 00:11:22.697 fio: pid=167614, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.697 00:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.697 00:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:22.955 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10264576, buflen=4096 00:11:22.955 fio: pid=167613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.213 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.213 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:23.213 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=471040, buflen=4096 00:11:23.213 fio: pid=167611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:23.471 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.471 
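From here the script switches to the hotplug check: it syncs, starts a 10-second verify-read job in the background, then deletes the RAID volumes and every malloc bdev while that job is still issuing I/O, which is what produces the io_u "Operation not supported" and "Input/output error" lines. The pass condition, checked further down, is that fio exits non-zero ("fio failed as expected"). A sketch of that pattern, using the wrapper and rpc.py paths from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
FIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

sync
"$FIO" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the namespaces' backing bdevs out from under the running job.
"$RPC" bdev_raid_delete concat0
"$RPC" bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_delete "$m"
done

fio_status=0
wait "$fio_pid" || fio_status=$?

if (( fio_status == 0 )); then
    echo "nvmf hotplug test: fio did not fail, but it should have" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"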
00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:23.471 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=21741568, buflen=4096 00:11:23.471 fio: pid=167612, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:23.471 00:11:23.471 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=167611: Sat Dec 7 00:38:39 2024 00:11:23.471 read: IOPS=33, BW=132KiB/s (135kB/s)(460KiB/3492msec) 00:11:23.471 slat (usec): min=5, max=13889, avg=142.09, stdev=1287.52 00:11:23.471 clat (usec): min=213, max=41129, avg=30006.11, stdev=18122.63 00:11:23.471 lat (usec): min=219, max=54975, avg=30148.80, stdev=18246.47 00:11:23.471 clat percentiles (usec): 00:11:23.471 | 1.00th=[ 237], 5.00th=[ 262], 10.00th=[ 289], 20.00th=[ 314], 00:11:23.471 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:23.471 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:23.471 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:23.471 | 99.99th=[41157] 00:11:23.471 bw ( KiB/s): min= 96, max= 336, per=1.62%, avg=138.67, stdev=96.75, samples=6 00:11:23.471 iops : min= 24, max= 84, avg=34.67, stdev=24.19, samples=6 00:11:23.471 lat (usec) : 250=4.31%, 500=22.41% 00:11:23.471 lat (msec) : 50=72.41% 00:11:23.471 cpu : usr=0.00%, sys=0.14%, ctx=119, majf=0, minf=2 00:11:23.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.471 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.471 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.471 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.471 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=167612: Sat Dec 7 00:38:39 2024 00:11:23.471 read: IOPS=1407, BW=5630KiB/s (5765kB/s)(20.7MiB/3771msec) 00:11:23.471 slat (usec): min=5, max=10738, avg=19.24, stdev=255.95 00:11:23.471 clat (usec): min=181, max=42405, avg=688.00, stdev=4205.89 00:11:23.471 lat (usec): min=186, max=51004, avg=705.97, stdev=4229.18 00:11:23.471 clat percentiles (usec): 00:11:23.471 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 221], 00:11:23.471 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:11:23.471 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 318], 00:11:23.471 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:23.471 | 99.99th=[42206] 00:11:23.471 bw ( KiB/s): min= 328, max=13520, per=70.84%, avg=6039.57, stdev=5423.59, samples=7 00:11:23.471 iops : min= 82, max= 3380, avg=1509.71, stdev=1355.89, samples=7 00:11:23.471 lat (usec) : 250=50.69%, 500=48.01%, 750=0.21% 00:11:23.471 lat (msec) : 4=0.02%, 50=1.05% 00:11:23.471 cpu : usr=1.22%, sys=2.84%, ctx=5312, majf=0, minf=1 00:11:23.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 issued rwts: total=5309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.472 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=167613: Sat Dec 7 00:38:39 2024 00:11:23.472 read: IOPS=774, BW=3099KiB/s (3173kB/s)(9.79MiB/3235msec) 00:11:23.472 slat (nsec): min=4542, max=71973, avg=13471.58, stdev=9975.47 00:11:23.472 clat (usec): min=175, max=42169, avg=1265.17, stdev=6345.12 00:11:23.472 lat (usec): min=180, max=42174, avg=1278.64, stdev=6346.47 00:11:23.472 clat percentiles (usec): 00:11:23.472 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:11:23.472 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 247], 00:11:23.472 | 70.00th=[ 269], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 449], 00:11:23.472 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:23.472 | 99.99th=[42206] 00:11:23.472 bw ( KiB/s): min= 96, max= 5744, per=33.91%, avg=2891.33, stdev=2655.21, samples=6 00:11:23.472 iops : min= 24, max= 1436, avg=722.83, stdev=663.80, samples=6 00:11:23.472 lat (usec) : 250=62.23%, 500=34.46%, 750=0.80% 00:11:23.472 lat (msec) : 50=2.47% 00:11:23.472 cpu : usr=0.53%, sys=1.08%, ctx=2507, majf=0, minf=1 00:11:23.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.472 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=167614: Sat Dec 7 00:38:39 2024 00:11:23.472 read: IOPS=37, BW=148KiB/s (152kB/s)(432KiB/2912msec) 00:11:23.472 slat (nsec): min=9398, max=45828, avg=23296.71, stdev=9659.57 00:11:23.472 clat (usec): min=276, max=41851, avg=26720.14, stdev=19474.38 00:11:23.472 lat (usec): min=293, max=41887, avg=26743.52, stdev=19476.88 00:11:23.472 clat percentiles (usec): 00:11:23.472 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 351], 20.00th=[ 420], 00:11:23.472 | 30.00th=[ 510], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:11:23.472 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:23.472 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:23.472 | 99.99th=[41681] 00:11:23.472 bw ( KiB/s): min= 96, max= 360, per=1.82%, avg=155.20, stdev=114.63, samples=5 00:11:23.472 iops : min= 24, max= 90, avg=38.80, stdev=28.66, samples=5 00:11:23.472 lat (usec) : 500=28.44%, 750=6.42% 00:11:23.472 lat (msec) : 50=64.22% 00:11:23.472 cpu : usr=0.03%, sys=0.10%, ctx=110, majf=0, minf=2 00:11:23.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:23.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.472 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:23.472 00:11:23.472 Run status group 0 (all jobs): 00:11:23.472 READ: bw=8525KiB/s (8730kB/s), 132KiB/s-5630KiB/s (135kB/s-5765kB/s), io=31.4MiB (32.9MB), run=2912-3771msec 00:11:23.472 00:11:23.472 Disk stats (read/write): 00:11:23.472 nvme0n1: ios=148/0, merge=0/0, ticks=4286/0, in_queue=4286, util=99.43% 00:11:23.472 nvme0n2: ios=5304/0, merge=0/0, ticks=3439/0, in_queue=3439, util=95.85% 00:11:23.472 nvme0n3: ios=2315/0, merge=0/0, ticks=3025/0, in_queue=3025, util=96.79% 
00:11:23.472 nvme0n4: ios=107/0, merge=0/0, ticks=2846/0, in_queue=2846, util=96.75% 00:11:23.730 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.730 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:23.988 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.988 00:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:24.245 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.245 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:24.503 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:24.503 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:24.762 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:24.762 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 167405 00:11:24.762 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:24.762 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:25.020 nvmf hotplug test: fio failed as expected 00:11:25.020 00:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.278 rmmod nvme_tcp 00:11:25.278 rmmod nvme_fabrics 00:11:25.278 rmmod nvme_keyring 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 165473 ']' 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 165473 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 165473 ']' 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 165473 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165473 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165473' 00:11:25.278 killing process with pid 165473 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 165473 00:11:25.278 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 165473 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.536 00:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.448 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.448 00:11:27.448 real 0m24.013s 00:11:27.448 user 1m25.141s 00:11:27.448 sys 0m6.257s 00:11:27.448 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.448 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.448 ************************************ 00:11:27.448 END TEST nvmf_fio_target 00:11:27.448 ************************************ 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 ************************************ 00:11:27.707 START TEST nvmf_bdevio 00:11:27.707 ************************************ 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:27.707 * Looking for test storage... 
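The nvmf_fio_target teardown traced above reduces to a short RPC sequence: the malloc bdevs are deleted out from under the running fio job, so fio exits with status 4 and the script reports "fio failed as expected" before tearing the subsystem down. A condensed sketch, using only the commands and names visible in the trace (not target/fio.sh itself):

    # Hedged recap of the teardown steps logged above; paths are the ones in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$bdev"                  # drop each hot-removed malloc bdev
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the kernel initiator
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || true   # serial should be gone
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state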
00:11:27.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.707 --rc genhtml_branch_coverage=1 00:11:27.707 --rc genhtml_function_coverage=1 00:11:27.707 --rc genhtml_legend=1 00:11:27.707 --rc geninfo_all_blocks=1 00:11:27.707 --rc geninfo_unexecuted_blocks=1 00:11:27.707 00:11:27.707 ' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.707 --rc genhtml_branch_coverage=1 00:11:27.707 --rc genhtml_function_coverage=1 00:11:27.707 --rc genhtml_legend=1 00:11:27.707 --rc geninfo_all_blocks=1 00:11:27.707 --rc geninfo_unexecuted_blocks=1 00:11:27.707 00:11:27.707 ' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.707 --rc genhtml_branch_coverage=1 00:11:27.707 --rc genhtml_function_coverage=1 00:11:27.707 --rc genhtml_legend=1 00:11:27.707 --rc geninfo_all_blocks=1 00:11:27.707 --rc geninfo_unexecuted_blocks=1 00:11:27.707 00:11:27.707 ' 00:11:27.707 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.707 --rc genhtml_branch_coverage=1 00:11:27.708 --rc genhtml_function_coverage=1 00:11:27.708 --rc genhtml_legend=1 00:11:27.708 --rc geninfo_all_blocks=1 00:11:27.708 --rc geninfo_unexecuted_blocks=1 00:11:27.708 00:11:27.708 ' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.708 00:38:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:30.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:30.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.252 00:38:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:30.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:30.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.252 
00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.252 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.253 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.253 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.253 00:38:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:11:30.253 00:11:30.253 --- 10.0.0.2 ping statistics --- 00:11:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.253 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:30.253 00:11:30.253 --- 10.0.0.1 ping statistics --- 00:11:30.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.253 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=170248 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 170248 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 170248 ']' 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 [2024-12-07 00:38:46.122249] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
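The nvmftestinit block above builds a two-interface NVMe/TCP test bed out of the detected e810 ports. Condensed from the ip/iptables commands in the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones shown in the log):

    # Target side lives in its own network namespace; initiator side stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator (host) address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the real rule also carries an SPDK_NVMF comment so the
    # teardown can strip it again via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host reachability check

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x78), which is why its listener address below is 10.0.0.2.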
00:11:30.253 [2024-12-07 00:38:46.122332] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.253 [2024-12-07 00:38:46.195554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.253 [2024-12-07 00:38:46.242294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.253 [2024-12-07 00:38:46.242362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.253 [2024-12-07 00:38:46.242376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.253 [2024-12-07 00:38:46.242387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.253 [2024-12-07 00:38:46.242396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.253 [2024-12-07 00:38:46.244039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:30.253 [2024-12-07 00:38:46.244091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:30.253 [2024-12-07 00:38:46.244142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:30.253 [2024-12-07 00:38:46.244145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 [2024-12-07 00:38:46.388684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.253 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 Malloc0 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.512 00:38:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 [2024-12-07 00:38:46.455384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:30.512 { 00:11:30.512 "params": { 00:11:30.512 "name": "Nvme$subsystem", 00:11:30.512 "trtype": "$TEST_TRANSPORT", 00:11:30.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.512 "adrfam": "ipv4", 00:11:30.512 "trsvcid": "$NVMF_PORT", 00:11:30.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.512 "hdgst": ${hdgst:-false}, 00:11:30.512 "ddgst": ${ddgst:-false} 00:11:30.512 }, 00:11:30.512 "method": "bdev_nvme_attach_controller" 00:11:30.512 } 00:11:30.512 EOF 00:11:30.512 )") 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:30.512 00:38:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:30.512 "params": { 00:11:30.512 "name": "Nvme1", 00:11:30.512 "trtype": "tcp", 00:11:30.512 "traddr": "10.0.0.2", 00:11:30.512 "adrfam": "ipv4", 00:11:30.512 "trsvcid": "4420", 00:11:30.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:30.512 "hdgst": false, 00:11:30.512 "ddgst": false 00:11:30.512 }, 00:11:30.512 "method": "bdev_nvme_attach_controller" 00:11:30.512 }' 00:11:30.512 [2024-12-07 00:38:46.505496] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:11:30.512 [2024-12-07 00:38:46.505559] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170278 ] 00:11:30.512 [2024-12-07 00:38:46.577593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.512 [2024-12-07 00:38:46.629885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.512 [2024-12-07 00:38:46.629930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.512 [2024-12-07 00:38:46.629933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.079 I/O targets: 00:11:31.079 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:31.079 00:11:31.079 00:11:31.079 CUnit - A unit testing framework for C - Version 2.1-3 00:11:31.079 http://cunit.sourceforge.net/ 00:11:31.079 00:11:31.079 00:11:31.079 Suite: bdevio tests on: Nvme1n1 00:11:31.079 Test: blockdev write read block ...passed 00:11:31.079 Test: blockdev write zeroes read block ...passed 00:11:31.079 Test: blockdev write zeroes read no split ...passed 00:11:31.079 Test: blockdev write zeroes read split ...passed 00:11:31.079 Test: blockdev write zeroes read split partial ...passed 00:11:31.079 Test: blockdev reset ...[2024-12-07 00:38:47.083329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:31.079 [2024-12-07 00:38:47.083443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992700 (9): Bad file descriptor 00:11:31.079 [2024-12-07 00:38:47.217697] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:31.079 passed 00:11:31.337 Test: blockdev write read 8 blocks ...passed 00:11:31.337 Test: blockdev write read size > 128k ...passed 00:11:31.337 Test: blockdev write read invalid size ...passed 00:11:31.337 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:31.337 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:31.337 Test: blockdev write read max offset ...passed 00:11:31.337 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:31.337 Test: blockdev writev readv 8 blocks ...passed 00:11:31.337 Test: blockdev writev readv 30 x 1block ...passed 00:11:31.337 Test: blockdev writev readv block ...passed 00:11:31.337 Test: blockdev writev readv size > 128k ...passed 00:11:31.337 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:31.337 Test: blockdev comparev and writev ...[2024-12-07 00:38:47.470895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.470932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.470958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.470987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.471344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.471378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.471401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.471418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.471746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.471770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.471791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.471807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.472150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.472175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:31.337 [2024-12-07 00:38:47.472197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.337 [2024-12-07 00:38:47.472213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:31.595 passed 00:11:31.595 Test: blockdev nvme passthru rw ...passed 00:11:31.595 Test: blockdev nvme passthru vendor specific ...[2024-12-07 00:38:47.555266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.595 [2024-12-07 00:38:47.555294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:31.595 [2024-12-07 00:38:47.555436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.595 [2024-12-07 00:38:47.555459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:31.595 [2024-12-07 00:38:47.555592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.595 [2024-12-07 00:38:47.555615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:31.595 [2024-12-07 00:38:47.555750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.595 [2024-12-07 00:38:47.555773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:31.595 passed 00:11:31.595 Test: blockdev nvme admin passthru ...passed 00:11:31.595 Test: blockdev copy ...passed 00:11:31.595 00:11:31.595 Run Summary: Type Total Ran Passed Failed Inactive 00:11:31.595 suites 1 1 n/a 0 0 00:11:31.595 tests 23 23 23 0 0 00:11:31.595 asserts 152 152 152 0 n/a 00:11:31.595 00:11:31.595 Elapsed time = 1.293 seconds 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.853 rmmod nvme_tcp 00:11:31.853 rmmod nvme_fabrics 00:11:31.853 rmmod nvme_keyring 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
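The bdevio pass above (23/23 tests, 152 asserts) ran against a target provisioned through the rpc_cmd calls traced earlier. Condensed into plain rpc.py form, with the rpc.py path shortened for readability, that provisioning looks roughly like:

    rpc=scripts/rpc.py        # shortened; the trace uses the full workspace path
    "$rpc" nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as traced
    "$rpc" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then connects as an NVMe-oF initiator via the generated JSON config
    # (Nvme1 -> tcp://10.0.0.2:4420, subnqn cnode1) and runs its CUnit suite.

Note that the COMPARE FAILURE and ABORTED - FAILED FUSED completions logged during the comparev/writev test are printed at NOTICE level and the test still reports passed; they are the error paths the test deliberately exercises, not failures of the run.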
00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 170248 ']' 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 170248 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 170248 ']' 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 170248 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170248 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170248' 00:11:31.853 killing process with pid 170248 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 170248 00:11:31.853 00:38:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 170248 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.111 00:38:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.658 00:11:34.658 real 0m6.558s 00:11:34.658 user 0m11.140s 00:11:34.658 sys 0m2.139s 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:34.658 ************************************ 00:11:34.658 END TEST nvmf_bdevio 00:11:34.658 ************************************ 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:34.658 00:11:34.658 real 3m56.023s 00:11:34.658 user 10m18.446s 00:11:34.658 sys 1m6.432s 00:11:34.658 
00:38:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.658 ************************************ 00:11:34.658 END TEST nvmf_target_core 00:11:34.658 ************************************ 00:11:34.658 00:38:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.658 00:38:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.658 00:38:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.658 00:38:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.658 ************************************ 00:11:34.658 START TEST nvmf_target_extra 00:11:34.658 ************************************ 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.658 * Looking for test storage... 00:11:34.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:34.658 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.659 --rc genhtml_branch_coverage=1 00:11:34.659 --rc genhtml_function_coverage=1 00:11:34.659 --rc genhtml_legend=1 00:11:34.659 --rc geninfo_all_blocks=1 00:11:34.659 --rc geninfo_unexecuted_blocks=1 00:11:34.659 00:11:34.659 ' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.659 --rc genhtml_branch_coverage=1 00:11:34.659 --rc genhtml_function_coverage=1 00:11:34.659 --rc genhtml_legend=1 00:11:34.659 --rc geninfo_all_blocks=1 00:11:34.659 --rc geninfo_unexecuted_blocks=1 00:11:34.659 00:11:34.659 ' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.659 --rc genhtml_branch_coverage=1 00:11:34.659 --rc genhtml_function_coverage=1 00:11:34.659 --rc genhtml_legend=1 00:11:34.659 --rc geninfo_all_blocks=1 00:11:34.659 --rc geninfo_unexecuted_blocks=1 00:11:34.659 00:11:34.659 ' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.659 --rc genhtml_branch_coverage=1 00:11:34.659 --rc genhtml_function_coverage=1 00:11:34.659 --rc genhtml_legend=1 00:11:34.659 --rc geninfo_all_blocks=1 00:11:34.659 --rc geninfo_unexecuted_blocks=1 00:11:34.659 00:11:34.659 ' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.659 ************************************ 00:11:34.659 START TEST nvmf_example 00:11:34.659 ************************************ 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.659 * Looking for test storage... 
00:11:34.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.659 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.660 --rc genhtml_branch_coverage=1 00:11:34.660 --rc genhtml_function_coverage=1 00:11:34.660 --rc genhtml_legend=1 00:11:34.660 --rc geninfo_all_blocks=1 00:11:34.660 --rc geninfo_unexecuted_blocks=1 00:11:34.660 00:11:34.660 ' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.660 --rc genhtml_branch_coverage=1 00:11:34.660 --rc genhtml_function_coverage=1 00:11:34.660 --rc genhtml_legend=1 00:11:34.660 --rc geninfo_all_blocks=1 00:11:34.660 --rc geninfo_unexecuted_blocks=1 00:11:34.660 00:11:34.660 ' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.660 --rc genhtml_branch_coverage=1 00:11:34.660 --rc genhtml_function_coverage=1 00:11:34.660 --rc genhtml_legend=1 00:11:34.660 --rc geninfo_all_blocks=1 00:11:34.660 --rc geninfo_unexecuted_blocks=1 00:11:34.660 00:11:34.660 ' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.660 --rc genhtml_branch_coverage=1 00:11:34.660 --rc genhtml_function_coverage=1 00:11:34.660 --rc genhtml_legend=1 00:11:34.660 --rc geninfo_all_blocks=1 00:11:34.660 --rc geninfo_unexecuted_blocks=1 00:11:34.660 00:11:34.660 ' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:34.660 00:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.660 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:34.661 00:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.661 00:38:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:37.191 00:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:37.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:37.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:37.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:37.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.191 00:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.191 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:11:37.192 00:11:37.192 --- 10.0.0.2 ping statistics --- 00:11:37.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.192 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:37.192 00:11:37.192 --- 10.0.0.1 ping statistics --- 00:11:37.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.192 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.192 00:38:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=172543 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 172543 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 172543 ']' 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.451 00:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:37.451 00:38:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:49.647 Initializing NVMe Controllers 00:11:49.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:49.647 Initialization complete. Launching workers. 00:11:49.647 ======================================================== 00:11:49.647 Latency(us) 00:11:49.647 Device Information : IOPS MiB/s Average min max 00:11:49.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15119.32 59.06 4232.59 868.54 15453.77 00:11:49.647 ======================================================== 00:11:49.647 Total : 15119.32 59.06 4232.59 868.54 15453.77 00:11:49.647 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.647 rmmod nvme_tcp 00:11:49.647 rmmod nvme_fabrics 00:11:49.647 rmmod nvme_keyring 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 172543 ']' 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 172543 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 172543 ']' 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 172543 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172543 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172543' 00:11:49.647 killing process with pid 172543 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 172543 00:11:49.647 00:39:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 172543 00:11:49.647 nvmf threads initialize successfully 00:11:49.647 bdev subsystem init successfully 00:11:49.647 created a nvmf target service 00:11:49.647 create targets's poll groups done 00:11:49.647 all subsystems of target started 00:11:49.647 nvmf target is running 00:11:49.647 all subsystems of target stopped 00:11:49.647 destroy targets's poll groups done 00:11:49.647 destroyed the nvmf target service 00:11:49.647 bdev subsystem finish successfully 00:11:49.647 nvmf threads destroy successfully 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.647 00:39:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.215 00:11:50.215 real 0m15.662s 00:11:50.215 user 0m42.952s 00:11:50.215 sys 0m3.432s 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.215 ************************************ 00:11:50.215 END TEST nvmf_example 00:11:50.215 ************************************ 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:50.215 00:39:06 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.215 ************************************ 00:11:50.215 START TEST nvmf_filesystem 00:11:50.215 ************************************ 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:50.215 * Looking for test storage... 00:11:50.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.215 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.216 --rc genhtml_branch_coverage=1 00:11:50.216 --rc genhtml_function_coverage=1 00:11:50.216 --rc genhtml_legend=1 00:11:50.216 --rc geninfo_all_blocks=1 00:11:50.216 --rc geninfo_unexecuted_blocks=1 00:11:50.216 00:11:50.216 ' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.216 --rc genhtml_branch_coverage=1 00:11:50.216 --rc genhtml_function_coverage=1 00:11:50.216 --rc genhtml_legend=1 00:11:50.216 --rc geninfo_all_blocks=1 00:11:50.216 --rc geninfo_unexecuted_blocks=1 00:11:50.216 00:11:50.216 ' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.216 --rc genhtml_branch_coverage=1 00:11:50.216 --rc genhtml_function_coverage=1 00:11:50.216 --rc genhtml_legend=1 00:11:50.216 --rc geninfo_all_blocks=1 00:11:50.216 --rc geninfo_unexecuted_blocks=1 00:11:50.216 00:11:50.216 ' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.216 --rc genhtml_branch_coverage=1 00:11:50.216 --rc genhtml_function_coverage=1 00:11:50.216 --rc genhtml_legend=1 00:11:50.216 --rc geninfo_all_blocks=1 00:11:50.216 --rc geninfo_unexecuted_blocks=1 00:11:50.216 00:11:50.216 ' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:50.216 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:50.216 
00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:50.216 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:50.217 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:50.217 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
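The applications.sh trace above first resolves the SPDK repository root from the script's own location (two dirname/readlink steps) and then defines one array per application binary so extra arguments can be appended before launch. A minimal sketch of that pattern, with the directory layout taken from the paths in the trace and the final demo command purely illustrative:

    # Resolve the repo root relative to this file (assumes it lives in <root>/test/common).
    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
    _root=$(readlink -f "$_root/../..")
    _app_dir=$_root/build/bin
    _test_app_dir=$_root/test/app
    _examples_dir=$_root/build/examples

    # Arrays rather than plain strings, so callers can append flags later,
    # e.g. NVMF_APP+=(--wait-for-rpc) before starting the target.
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")

    echo "nvmf target command line: ${NVMF_APP[*]}"

Keeping each application as an array is what lets later test code run "${NVMF_APP[@]}" "$@" and still get correct word splitting.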
00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:50.480 #define SPDK_CONFIG_H 00:11:50.480 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:50.480 #define SPDK_CONFIG_APPS 1 00:11:50.480 #define SPDK_CONFIG_ARCH native 00:11:50.480 #undef SPDK_CONFIG_ASAN 00:11:50.480 #undef SPDK_CONFIG_AVAHI 00:11:50.480 #undef SPDK_CONFIG_CET 00:11:50.480 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:50.480 #define SPDK_CONFIG_COVERAGE 1 00:11:50.480 #define SPDK_CONFIG_CROSS_PREFIX 00:11:50.480 #undef SPDK_CONFIG_CRYPTO 00:11:50.480 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:50.480 #undef SPDK_CONFIG_CUSTOMOCF 00:11:50.480 #undef SPDK_CONFIG_DAOS 00:11:50.480 #define SPDK_CONFIG_DAOS_DIR 00:11:50.480 #define SPDK_CONFIG_DEBUG 1 00:11:50.480 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:50.480 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:50.480 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:50.480 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:50.480 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:50.480 #undef SPDK_CONFIG_DPDK_UADK 00:11:50.480 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:50.480 #define SPDK_CONFIG_EXAMPLES 1 00:11:50.480 #undef SPDK_CONFIG_FC 00:11:50.480 #define SPDK_CONFIG_FC_PATH 00:11:50.480 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:50.480 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:50.480 #define SPDK_CONFIG_FSDEV 1 00:11:50.480 #undef SPDK_CONFIG_FUSE 00:11:50.480 #undef SPDK_CONFIG_FUZZER 00:11:50.480 #define SPDK_CONFIG_FUZZER_LIB 00:11:50.480 #undef SPDK_CONFIG_GOLANG 00:11:50.480 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:50.480 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:50.480 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:50.480 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:50.480 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:50.480 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:50.480 #undef SPDK_CONFIG_HAVE_LZ4 00:11:50.480 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:50.480 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:50.480 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:50.480 #define SPDK_CONFIG_IDXD 1 00:11:50.480 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:50.480 #undef SPDK_CONFIG_IPSEC_MB 00:11:50.480 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:50.480 #define SPDK_CONFIG_ISAL 1 00:11:50.480 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:50.480 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:50.480 #define SPDK_CONFIG_LIBDIR 00:11:50.480 #undef SPDK_CONFIG_LTO 00:11:50.480 #define SPDK_CONFIG_MAX_LCORES 128 00:11:50.480 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:50.480 #define SPDK_CONFIG_NVME_CUSE 1 00:11:50.480 #undef SPDK_CONFIG_OCF 00:11:50.480 #define SPDK_CONFIG_OCF_PATH 00:11:50.480 #define SPDK_CONFIG_OPENSSL_PATH 00:11:50.480 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:50.480 #define SPDK_CONFIG_PGO_DIR 00:11:50.480 #undef SPDK_CONFIG_PGO_USE 00:11:50.480 #define SPDK_CONFIG_PREFIX /usr/local 00:11:50.480 #undef SPDK_CONFIG_RAID5F 00:11:50.480 #undef SPDK_CONFIG_RBD 00:11:50.480 #define SPDK_CONFIG_RDMA 1 00:11:50.480 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:50.480 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:50.480 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:50.480 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:50.480 #define SPDK_CONFIG_SHARED 1 00:11:50.480 #undef SPDK_CONFIG_SMA 00:11:50.480 #define SPDK_CONFIG_TESTS 1 00:11:50.480 #undef SPDK_CONFIG_TSAN 00:11:50.480 #define SPDK_CONFIG_UBLK 1 00:11:50.480 #define SPDK_CONFIG_UBSAN 1 00:11:50.480 #undef SPDK_CONFIG_UNIT_TESTS 00:11:50.480 #undef SPDK_CONFIG_URING 00:11:50.480 #define SPDK_CONFIG_URING_PATH 00:11:50.480 #undef SPDK_CONFIG_URING_ZNS 00:11:50.480 #undef SPDK_CONFIG_USDT 00:11:50.480 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:50.480 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:50.480 #define SPDK_CONFIG_VFIO_USER 1 00:11:50.480 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:50.480 #define SPDK_CONFIG_VHOST 1 00:11:50.480 #define SPDK_CONFIG_VIRTIO 1 00:11:50.480 #undef SPDK_CONFIG_VTUNE 00:11:50.480 #define SPDK_CONFIG_VTUNE_DIR 00:11:50.480 #define SPDK_CONFIG_WERROR 1 00:11:50.480 #define SPDK_CONFIG_WPDK_DIR 00:11:50.480 #undef SPDK_CONFIG_XNVME 00:11:50.480 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.480 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:50.481 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
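Each `: N` line immediately followed by an `export SPDK_TEST_*` line in the trace above is consistent with bash's default-value expansion: the harness assigns a default only when the caller has not already set the flag, then exports it so every child process sees it. A small sketch of the idiom, reusing flag names from the log (the invocation in the comment is only an example):

    # Default a test flag unless the environment already provides a value, then export it.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_RUN_VALGRIND:=0}"
    export SPDK_RUN_VALGRIND

    # A caller can flip individual features without editing the script, e.g.:
    #   SPDK_RUN_VALGRIND=1 ./test/nvmf/target/filesystem.sh --transport=tcp

The `:` builtin ignores its arguments, so the line exists purely for the `:=` side effect, which is why it shows up in the xtrace as a bare `: 0` or `: 1`.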
00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:50.481 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:50.481 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
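Among the flags above, SPDK_RUN_EXTERNAL_DPDK points at the same prebuilt DPDK tree that CONFIG_DPDK_DIR, CONFIG_DPDK_INC_DIR and CONFIG_DPDK_LIB_DIR referenced earlier, i.e. this run builds SPDK against an external DPDK v23.11 rather than the bundled submodule. A hedged sketch of how a wrapper might turn that flag into configure arguments (`--with-dpdk` and `--with-shared` are SPDK configure options; the wrapper itself is hypothetical and only echoes the result):

    # Hypothetical wrapper: choose bundled vs. external DPDK from the test flag.
    configure_args=(--with-shared)
    if [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} ]]; then
        configure_args+=("--with-dpdk=${SPDK_RUN_EXTERNAL_DPDK}")
    fi
    echo "./configure ${configure_args[*]}"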
00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:50.482 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:50.482 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:50.483 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 174346 ]] 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 174346 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3bW5bM 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:50.483 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3bW5bM/tests/target /tmp/spdk.3bW5bM 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=53781647360 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988511744 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8206864384 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=30984224768 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.483 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993940480 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:50.484 * Looking for test storage... 
00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=53781647360 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10421456896 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:50.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.484 --rc genhtml_branch_coverage=1 00:11:50.484 --rc genhtml_function_coverage=1 00:11:50.484 --rc genhtml_legend=1 00:11:50.484 --rc geninfo_all_blocks=1 00:11:50.484 --rc geninfo_unexecuted_blocks=1 00:11:50.484 00:11:50.484 ' 00:11:50.484 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:50.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.484 --rc genhtml_branch_coverage=1 00:11:50.485 --rc genhtml_function_coverage=1 00:11:50.485 --rc genhtml_legend=1 00:11:50.485 --rc geninfo_all_blocks=1 00:11:50.485 --rc geninfo_unexecuted_blocks=1 00:11:50.485 00:11:50.485 ' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.485 --rc genhtml_branch_coverage=1 00:11:50.485 --rc genhtml_function_coverage=1 00:11:50.485 --rc genhtml_legend=1 00:11:50.485 --rc geninfo_all_blocks=1 00:11:50.485 --rc geninfo_unexecuted_blocks=1 00:11:50.485 00:11:50.485 ' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.485 --rc genhtml_branch_coverage=1 00:11:50.485 --rc genhtml_function_coverage=1 00:11:50.485 --rc genhtml_legend=1 00:11:50.485 --rc geninfo_all_blocks=1 00:11:50.485 --rc geninfo_unexecuted_blocks=1 00:11:50.485 00:11:50.485 ' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.485 00:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.485 00:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.018 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:53.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:53.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.019 00:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:53.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:53.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.019 00:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:11:53.019 00:11:53.019 --- 10.0.0.2 ping statistics --- 00:11:53.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.019 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:53.019 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:11:53.019 00:11:53.020 --- 10.0.0.1 ping statistics --- 00:11:53.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.020 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 ************************************ 00:11:53.020 START TEST nvmf_filesystem_no_in_capsule 00:11:53.020 ************************************ 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=176447 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 176447 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 176447 ']' 00:11:53.020 00:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.020 00:39:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 [2024-12-07 00:39:08.909335] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:11:53.020 [2024-12-07 00:39:08.909424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.020 [2024-12-07 00:39:08.985934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.020 [2024-12-07 00:39:09.033366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.020 [2024-12-07 00:39:09.033416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.020 [2024-12-07 00:39:09.033440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.020 [2024-12-07 00:39:09.033451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.020 [2024-12-07 00:39:09.033460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
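The nvmf_tgt instance whose startup banner appears here is launched inside the cvl_0_0_ns_spdk network namespace prepared a few steps earlier by nvmf_tcp_init in nvmf/common.sh: the target-side port cvl_0_0 is moved into the namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened in iptables, and reachability is confirmed with ping in each direction. A condensed sketch of that preparation, using the interface names from this run; the real helper does slightly more than shown (address flushes, an iptables comment tag).

# Condensed sketch of the NVMe/TCP test-network prep traced above; interface
# and namespace names are the ones from this run.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator address on the host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # target reachable from the host
ip netns exec "$NS" ping -c 1 10.0.0.1            # host reachable from the namespace
# The target application is then run inside the namespace, as in the log above:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &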
00:11:53.020 [2024-12-07 00:39:09.034938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.020 [2024-12-07 00:39:09.035033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.020 [2024-12-07 00:39:09.035064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.020 [2024-12-07 00:39:09.035068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.020 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 [2024-12-07 00:39:09.168557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 [2024-12-07 00:39:09.355558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:53.279 { 00:11:53.279 "name": "Malloc1", 00:11:53.279 "aliases": [ 00:11:53.279 "c94ab188-7bb6-461e-bd0b-4c2daad96dfb" 00:11:53.279 ], 00:11:53.279 "product_name": "Malloc disk", 00:11:53.279 "block_size": 512, 00:11:53.279 "num_blocks": 1048576, 00:11:53.279 "uuid": "c94ab188-7bb6-461e-bd0b-4c2daad96dfb", 00:11:53.279 "assigned_rate_limits": { 00:11:53.279 "rw_ios_per_sec": 0, 00:11:53.279 "rw_mbytes_per_sec": 0, 00:11:53.279 "r_mbytes_per_sec": 0, 00:11:53.279 "w_mbytes_per_sec": 0 00:11:53.279 }, 00:11:53.279 "claimed": true, 00:11:53.279 "claim_type": "exclusive_write", 00:11:53.279 "zoned": false, 00:11:53.279 "supported_io_types": { 00:11:53.279 "read": 
true, 00:11:53.279 "write": true, 00:11:53.279 "unmap": true, 00:11:53.279 "flush": true, 00:11:53.279 "reset": true, 00:11:53.279 "nvme_admin": false, 00:11:53.279 "nvme_io": false, 00:11:53.279 "nvme_io_md": false, 00:11:53.279 "write_zeroes": true, 00:11:53.279 "zcopy": true, 00:11:53.279 "get_zone_info": false, 00:11:53.279 "zone_management": false, 00:11:53.279 "zone_append": false, 00:11:53.279 "compare": false, 00:11:53.279 "compare_and_write": false, 00:11:53.279 "abort": true, 00:11:53.279 "seek_hole": false, 00:11:53.279 "seek_data": false, 00:11:53.279 "copy": true, 00:11:53.279 "nvme_iov_md": false 00:11:53.279 }, 00:11:53.279 "memory_domains": [ 00:11:53.279 { 00:11:53.279 "dma_device_id": "system", 00:11:53.279 "dma_device_type": 1 00:11:53.279 }, 00:11:53.279 { 00:11:53.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.279 "dma_device_type": 2 00:11:53.279 } 00:11:53.279 ], 00:11:53.279 "driver_specific": {} 00:11:53.279 } 00:11:53.279 ]' 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:53.279 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:53.538 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:53.538 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:53.538 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:53.538 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.538 00:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.105 00:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.105 00:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.105 00:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.105 00:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.105 00:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.001 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.300 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:56.557 00:39:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:57.489 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:57.489 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:57.489 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.489 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.489 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.747 ************************************ 00:11:57.747 START TEST filesystem_ext4 00:11:57.747 ************************************ 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
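From here the test creates the target-side resources over RPC, connects from the initiator, partitions the exported namespace, and repeats a small mkfs/mount/touch/umount check for ext4, btrfs and xfs. The sequence reduces to the sketch below; rpc_cmd in the trace forwards to scripts/rpc.py against the target's RPC socket, the for loop stands in for the separate run_test invocations of the real harness, and NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh as shown earlier.

# Target side: TCP transport, a 512 MiB malloc bdev, and a listener on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, partition, then the per-filesystem smoke test.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p /mnt/device
for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F    # make_filesystem uses -F for ext4, -f otherwise
    mkfs."$fstype" "$force" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
done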
00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:57.747 mke2fs 1.47.0 (5-Feb-2023) 00:11:57.747 Discarding device blocks: 0/522240 done 00:11:57.747 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:57.747 Filesystem UUID: 00f54269-6e53-42ce-81fe-052bbad337af 00:11:57.747 Superblock backups stored on blocks: 00:11:57.747 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:57.747 00:11:57.747 Allocating group tables: 0/64 done 00:11:57.747 Writing inode tables: 0/64 done 00:11:57.747 Creating journal (8192 blocks): done 00:11:57.747 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.747 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.747 00:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.316 
00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 176447 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.316 00:39:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.316 00:12:04.316 real 0m6.354s 00:12:04.316 user 0m0.011s 00:12:04.316 sys 0m0.115s 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:04.316 ************************************ 00:12:04.316 END TEST filesystem_ext4 00:12:04.316 ************************************ 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.316 ************************************ 00:12:04.316 START TEST filesystem_btrfs 00:12:04.316 ************************************ 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:04.316 00:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:04.316 btrfs-progs v6.8.1 00:12:04.316 See https://btrfs.readthedocs.io for more information. 00:12:04.316 00:12:04.316 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:04.316 NOTE: several default settings have changed in version 5.15, please make sure 00:12:04.316 this does not affect your deployments: 00:12:04.316 - DUP for metadata (-m dup) 00:12:04.316 - enabled no-holes (-O no-holes) 00:12:04.316 - enabled free-space-tree (-R free-space-tree) 00:12:04.316 00:12:04.316 Label: (null) 00:12:04.316 UUID: 823329de-8659-4ad8-b855-4aaa810ea5cd 00:12:04.316 Node size: 16384 00:12:04.316 Sector size: 4096 (CPU page size: 4096) 00:12:04.316 Filesystem size: 510.00MiB 00:12:04.316 Block group profiles: 00:12:04.316 Data: single 8.00MiB 00:12:04.316 Metadata: DUP 32.00MiB 00:12:04.316 System: DUP 8.00MiB 00:12:04.316 SSD detected: yes 00:12:04.316 Zoned device: no 00:12:04.316 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:04.316 Checksum: crc32c 00:12:04.316 Number of devices: 1 00:12:04.316 Devices: 00:12:04.316 ID SIZE PATH 00:12:04.316 1 510.00MiB /dev/nvme0n1p1 00:12:04.316 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.316 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.574 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.574 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 176447 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.575 
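The make_filesystem helper traced at common/autotest_common.sh lines 930-949 just selects the right force flag for the filesystem type and calls the matching mkfs tool; an approximate reconstruction from the xtrace (the "local i=0" hints at a retry loop whose bounds are not visible in this log):

make_filesystem() {
    local fstype=$1 dev_name=$2
    local i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F             # mkfs.ext4 forces with -F
    else
        force=-f             # mkfs.btrfs / mkfs.xfs force with -f
    fi
    mkfs.$fstype $force "$dev_name" && return 0
}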
00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.575 00:12:04.575 real 0m0.623s 00:12:04.575 user 0m0.013s 00:12:04.575 sys 0m0.143s 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:04.575 ************************************ 00:12:04.575 END TEST filesystem_btrfs 00:12:04.575 ************************************ 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.575 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.833 ************************************ 00:12:04.833 START TEST filesystem_xfs 00:12:04.833 ************************************ 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:04.833 00:39:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:04.833 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:04.833 = sectsz=512 attr=2, projid32bit=1 00:12:04.833 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:04.833 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:04.833 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:04.833 = sunit=0 swidth=0 blks 00:12:04.833 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:04.833 log =internal log bsize=4096 blocks=16384, version=2 00:12:04.833 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:04.833 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.207 Discarding blocks...Done. 00:12:06.207 00:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:06.207 00:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:08.111 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 176447 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.370 00:12:08.370 real 0m3.585s 00:12:08.370 user 0m0.016s 00:12:08.370 sys 0m0.100s 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:08.370 ************************************ 00:12:08.370 END TEST filesystem_xfs 00:12:08.370 ************************************ 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.370 00:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 176447 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 176447 ']' 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 176447 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176447 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176447' 00:12:08.370 killing process with pid 176447 00:12:08.370 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 176447 00:12:08.629 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 176447 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:08.888 00:12:08.888 real 0m16.050s 00:12:08.888 user 1m2.068s 00:12:08.888 sys 0m2.351s 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 ************************************ 00:12:08.888 END TEST nvmf_filesystem_no_in_capsule 00:12:08.888 ************************************ 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 ************************************ 00:12:08.888 START TEST nvmf_filesystem_in_capsule 00:12:08.888 ************************************ 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=178591 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 178591 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 178591 ']' 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
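The in-capsule variant starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and blocks until its RPC socket answers; a minimal approximation of that start-and-wait step (rpc.py stands for SPDK's scripts/rpc.py, and the real waitforlisten helper does more bookkeeping than shown):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # 178591 in this run
# wait until the target is up and listening on /var/tmp/spdk.sock
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid"                       # give up early if the target already died
    sleep 0.5
done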
00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.888 00:39:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 [2024-12-07 00:39:25.012212] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:08.888 [2024-12-07 00:39:25.012294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.149 [2024-12-07 00:39:25.089864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.149 [2024-12-07 00:39:25.139260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.149 [2024-12-07 00:39:25.139318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.149 [2024-12-07 00:39:25.139331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.149 [2024-12-07 00:39:25.139343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.149 [2024-12-07 00:39:25.139352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.149 [2024-12-07 00:39:25.140932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.149 [2024-12-07 00:39:25.141002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.149 [2024-12-07 00:39:25.141028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.149 [2024-12-07 00:39:25.141032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.149 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.149 [2024-12-07 00:39:25.293181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.406 00:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.406 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:09.406 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.407 Malloc1 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.407 [2024-12-07 00:39:25.495594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:09.407 00:39:25 
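Taken together, the RPC sequence traced above enables 4096-byte in-capsule data on the TCP transport and exports a 512 MiB malloc bdev as namespace 1 of cnode1; the equivalent plain rpc.py calls are listed below (rpc_cmd in the trace is the suite's wrapper around the same RPCs):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: in-capsule data size
rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB backing bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420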
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:09.407 { 00:12:09.407 "name": "Malloc1", 00:12:09.407 "aliases": [ 00:12:09.407 "28b7c1e3-703b-4ae3-81ee-2e9d06d58ddc" 00:12:09.407 ], 00:12:09.407 "product_name": "Malloc disk", 00:12:09.407 "block_size": 512, 00:12:09.407 "num_blocks": 1048576, 00:12:09.407 "uuid": "28b7c1e3-703b-4ae3-81ee-2e9d06d58ddc", 00:12:09.407 "assigned_rate_limits": { 00:12:09.407 "rw_ios_per_sec": 0, 00:12:09.407 "rw_mbytes_per_sec": 0, 00:12:09.407 "r_mbytes_per_sec": 0, 00:12:09.407 "w_mbytes_per_sec": 0 00:12:09.407 }, 00:12:09.407 "claimed": true, 00:12:09.407 "claim_type": "exclusive_write", 00:12:09.407 "zoned": false, 00:12:09.407 "supported_io_types": { 00:12:09.407 "read": true, 00:12:09.407 "write": true, 00:12:09.407 "unmap": true, 00:12:09.407 "flush": true, 00:12:09.407 "reset": true, 00:12:09.407 "nvme_admin": false, 00:12:09.407 "nvme_io": false, 00:12:09.407 "nvme_io_md": false, 00:12:09.407 "write_zeroes": true, 00:12:09.407 "zcopy": true, 00:12:09.407 "get_zone_info": false, 00:12:09.407 "zone_management": false, 00:12:09.407 "zone_append": false, 00:12:09.407 "compare": false, 00:12:09.407 "compare_and_write": false, 00:12:09.407 "abort": true, 00:12:09.407 "seek_hole": false, 00:12:09.407 "seek_data": false, 00:12:09.407 "copy": true, 00:12:09.407 "nvme_iov_md": false 00:12:09.407 }, 00:12:09.407 "memory_domains": [ 00:12:09.407 { 00:12:09.407 "dma_device_id": "system", 00:12:09.407 "dma_device_type": 1 00:12:09.407 }, 00:12:09.407 { 00:12:09.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.407 "dma_device_type": 2 00:12:09.407 } 00:12:09.407 ], 00:12:09.407 "driver_specific": {} 00:12:09.407 } 00:12:09.407 ]' 00:12:09.407 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:09.665 00:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.230 00:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.230 00:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.230 00:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.230 00:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.230 00:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:12.759 00:39:28 
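On the host side the test then connects to the subsystem, waits for the namespace to appear, maps the SPDKISFASTANDAWESOME serial to a block device, checks its size against the malloc bdev, and lays down one GPT partition; condensed from the trace (reading the size from /sys/block is an assumption about what the sec_size_to_bytes helper does internally):

bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
malloc_size=$((bs * nb))                                         # 536870912 bytes
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
while ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
nvme_size=$((512 * $(cat /sys/block/$nvme_name/size)))           # /sys size is in 512-byte sectors
mkdir -p /mnt/device
(( nvme_size == malloc_size ))
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe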
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:12.759 00:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.693 ************************************ 00:12:13.693 START TEST filesystem_in_capsule_ext4 00:12:13.693 ************************************ 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:13.693 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:13.694 00:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:13.694 mke2fs 1.47.0 (5-Feb-2023) 00:12:13.952 Discarding device blocks: 0/522240 done 00:12:13.952 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:13.952 Filesystem UUID: b004af92-6d4b-4774-9538-a11efefb59cb 00:12:13.952 Superblock backups stored on blocks: 00:12:13.952 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:13.952 00:12:13.952 Allocating group tables: 0/64 done 00:12:13.952 Writing inode tables: 
0/64 done 00:12:13.952 Creating journal (8192 blocks): done 00:12:13.952 Writing superblocks and filesystem accounting information: 0/64 done 00:12:13.952 00:12:13.952 00:39:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:13.952 00:39:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 178591 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.501 00:12:20.501 real 0m5.822s 00:12:20.501 user 0m0.025s 00:12:20.501 sys 0m0.065s 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:20.501 ************************************ 00:12:20.501 END TEST filesystem_in_capsule_ext4 00:12:20.501 ************************************ 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.501 
************************************ 00:12:20.501 START TEST filesystem_in_capsule_btrfs 00:12:20.501 ************************************ 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:20.501 00:39:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:20.501 btrfs-progs v6.8.1 00:12:20.501 See https://btrfs.readthedocs.io for more information. 00:12:20.501 00:12:20.501 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:20.501 NOTE: several default settings have changed in version 5.15, please make sure 00:12:20.501 this does not affect your deployments: 00:12:20.501 - DUP for metadata (-m dup) 00:12:20.501 - enabled no-holes (-O no-holes) 00:12:20.501 - enabled free-space-tree (-R free-space-tree) 00:12:20.501 00:12:20.501 Label: (null) 00:12:20.501 UUID: 9fba0a3b-c9f0-492a-8689-3648ff44fc61 00:12:20.502 Node size: 16384 00:12:20.502 Sector size: 4096 (CPU page size: 4096) 00:12:20.502 Filesystem size: 510.00MiB 00:12:20.502 Block group profiles: 00:12:20.502 Data: single 8.00MiB 00:12:20.502 Metadata: DUP 32.00MiB 00:12:20.502 System: DUP 8.00MiB 00:12:20.502 SSD detected: yes 00:12:20.502 Zoned device: no 00:12:20.502 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:20.502 Checksum: crc32c 00:12:20.502 Number of devices: 1 00:12:20.502 Devices: 00:12:20.502 ID SIZE PATH 00:12:20.502 1 510.00MiB /dev/nvme0n1p1 00:12:20.502 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 178591 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.502 00:12:20.502 real 0m0.899s 00:12:20.502 user 0m0.030s 00:12:20.502 sys 0m0.086s 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:20.502 ************************************ 00:12:20.502 END TEST filesystem_in_capsule_btrfs 00:12:20.502 ************************************ 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.502 ************************************ 00:12:20.502 START TEST filesystem_in_capsule_xfs 00:12:20.502 ************************************ 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:20.502 00:39:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:20.760 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:20.760 = sectsz=512 attr=2, projid32bit=1 00:12:20.760 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:20.760 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:20.760 data = bsize=4096 blocks=130560, imaxpct=25 00:12:20.760 = sunit=0 swidth=0 blks 00:12:20.760 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:20.760 log =internal log bsize=4096 blocks=16384, version=2 00:12:20.760 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:20.760 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:21.693 Discarding blocks...Done. 
00:12:21.693 00:39:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:21.693 00:39:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 178591 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.592 00:12:23.592 real 0m2.934s 00:12:23.592 user 0m0.017s 00:12:23.592 sys 0m0.056s 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:23.592 ************************************ 00:12:23.592 END TEST filesystem_in_capsule_xfs 00:12:23.592 ************************************ 00:12:23.592 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 178591 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 178591 ']' 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 178591 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 178591 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 178591' 00:12:23.851 killing process with pid 178591 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 178591 00:12:23.851 00:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 178591 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:24.418 00:12:24.418 real 0m15.410s 00:12:24.418 user 0m59.630s 00:12:24.418 sys 0m2.074s 00:12:24.418 00:39:40 
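Teardown mirrors the setup: drop the test partition, detach the initiator, wait for the serial to disappear, delete the subsystem over RPC, then kill and reap the target; approximately:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# waitforserial_disconnect: poll until no block device reports the serial any more
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
ps --no-headers -o comm= 178591      # killprocess sanity check: reactor_0 in this run
kill 178591 && wait 178591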
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.418 ************************************ 00:12:24.418 END TEST nvmf_filesystem_in_capsule 00:12:24.418 ************************************ 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.418 rmmod nvme_tcp 00:12:24.418 rmmod nvme_fabrics 00:12:24.418 rmmod nvme_keyring 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.418 00:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.960 00:12:26.960 real 0m36.297s 00:12:26.960 user 2m2.798s 00:12:26.960 sys 0m6.169s 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.960 
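nvmftestfini then unloads the host-side NVMe/TCP modules, restores iptables, and flushes the test network plumbing; the visible effect of the trace is roughly:

modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules the test added
ip -4 addr flush cvl_0_1       # removal of the cvl_0_0_ns_spdk namespace is suppressed in the log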
************************************ 00:12:26.960 END TEST nvmf_filesystem 00:12:26.960 ************************************ 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.960 ************************************ 00:12:26.960 START TEST nvmf_target_discovery 00:12:26.960 ************************************ 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.960 * Looking for test storage... 00:12:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.960 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.961 --rc genhtml_branch_coverage=1 00:12:26.961 --rc genhtml_function_coverage=1 00:12:26.961 --rc genhtml_legend=1 00:12:26.961 --rc geninfo_all_blocks=1 00:12:26.961 --rc geninfo_unexecuted_blocks=1 00:12:26.961 00:12:26.961 ' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.961 --rc genhtml_branch_coverage=1 00:12:26.961 --rc genhtml_function_coverage=1 00:12:26.961 --rc genhtml_legend=1 00:12:26.961 --rc geninfo_all_blocks=1 00:12:26.961 --rc geninfo_unexecuted_blocks=1 00:12:26.961 00:12:26.961 ' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.961 --rc genhtml_branch_coverage=1 00:12:26.961 --rc genhtml_function_coverage=1 00:12:26.961 --rc genhtml_legend=1 00:12:26.961 --rc geninfo_all_blocks=1 00:12:26.961 --rc geninfo_unexecuted_blocks=1 00:12:26.961 00:12:26.961 ' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.961 --rc genhtml_branch_coverage=1 00:12:26.961 --rc genhtml_function_coverage=1 00:12:26.961 --rc genhtml_legend=1 00:12:26.961 --rc geninfo_all_blocks=1 00:12:26.961 --rc geninfo_unexecuted_blocks=1 00:12:26.961 00:12:26.961 ' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.961 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.962 00:39:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.871 00:39:44 
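As an aside, the NVME_HOSTNQN/NVME_HOSTID pair logged while sourcing nvmf/common.sh a few lines up comes straight from nvme-cli; a minimal sketch of that derivation (variable names match the script, the suffix-stripping step is an assumption about how the ID is pulled out of the NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)                      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                   # assumed: keep only the bare UUID part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")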
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.871 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.872 00:39:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.872 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.133 00:39:45 
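Backing up a step, the pci_devs/net_devs bookkeeping above is what turns the two Intel E810 ports (device ID 0x159b, ice driver) into the cvl_0_0/cvl_0_1 names the rest of the test uses. Reduced to plain sysfs probing, the selection looks roughly like this (a simplification of common.sh, not its actual code):

  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue   # Intel E810
      [[ -d $pci/net ]] || continue                     # skip ports not bound to a net driver
      for net in "$pci"/net/*; do
          dev=${net##*/}
          [[ $(<"/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
      done
  done
  echo "Found net devices: ${net_devs[*]}"              # cvl_0_0 and cvl_0_1 on this node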
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:29.133 00:12:29.133 --- 10.0.0.2 ping statistics --- 00:12:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.133 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:12:29.133 00:12:29.133 --- 10.0.0.1 ping statistics --- 00:12:29.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.133 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=182599 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 182599 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 182599 ']' 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.133 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.133 [2024-12-07 00:39:45.207392] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:29.133 [2024-12-07 00:39:45.207472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.133 [2024-12-07 00:39:45.281676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.394 [2024-12-07 00:39:45.329580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.394 [2024-12-07 00:39:45.329634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.394 [2024-12-07 00:39:45.329649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.394 [2024-12-07 00:39:45.329660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.394 [2024-12-07 00:39:45.329670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
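Taken together, the nvmftestinit lines above hand one physical port to a private namespace, address both ends of the link as a /24, open the firewall for port 4420, sanity-check the path with ping in both directions, and then launch nvmf_tgt inside that namespace. A condensed sketch (paths are relative to the SPDK checkout; the readiness poll at the end is an assumed stand-in for waitforlisten):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done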
00:12:29.394 [2024-12-07 00:39:45.331458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.394 [2024-12-07 00:39:45.331524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.394 [2024-12-07 00:39:45.331589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.394 [2024-12-07 00:39:45.331591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 [2024-12-07 00:39:45.480004] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 Null1 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 00:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 [2024-12-07 00:39:45.536175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.394 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 Null2 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:29.652 Null3 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:29.652 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 Null4 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 
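The rpc_cmd traffic above, plus the listener and referral calls that follow just below, is the entire discovery fixture. Issued directly with scripts/rpc.py it would look roughly like this (the 102400/512 sizes are NULL_BDEV_SIZE/NULL_BLOCK_SIZE from discovery.sh; the default /var/tmp/spdk.sock RPC socket is assumed):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $RPC bdev_null_create Null$i 102400 512
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420    # the discovery subsystem itself
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430              # the sixth record in the log page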
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.653 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:29.911 00:12:29.911 Discovery Log Number of Records 6, Generation counter 6 00:12:29.911 =====Discovery Log Entry 0====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: current discovery subsystem 00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4420 00:12:29.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: explicit discovery connections, duplicate discovery information 00:12:29.911 sectype: none 00:12:29.911 =====Discovery Log Entry 1====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: nvme subsystem 00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4420 00:12:29.911 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: none 00:12:29.911 sectype: none 00:12:29.911 =====Discovery Log Entry 2====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: nvme subsystem 00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4420 00:12:29.911 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: none 00:12:29.911 sectype: none 00:12:29.911 =====Discovery Log Entry 3====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: nvme subsystem 00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4420 00:12:29.911 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: none 00:12:29.911 sectype: none 00:12:29.911 =====Discovery Log Entry 4====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: nvme subsystem 
00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4420 00:12:29.911 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: none 00:12:29.911 sectype: none 00:12:29.911 =====Discovery Log Entry 5====== 00:12:29.911 trtype: tcp 00:12:29.911 adrfam: ipv4 00:12:29.911 subtype: discovery subsystem referral 00:12:29.911 treq: not required 00:12:29.911 portid: 0 00:12:29.911 trsvcid: 4430 00:12:29.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:29.911 traddr: 10.0.0.2 00:12:29.911 eflags: none 00:12:29.911 sectype: none 00:12:29.911 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:29.911 Perform nvmf subsystem discovery via RPC 00:12:29.911 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 [ 00:12:29.912 { 00:12:29.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.912 "subtype": "Discovery", 00:12:29.912 "listen_addresses": [ 00:12:29.912 { 00:12:29.912 "trtype": "TCP", 00:12:29.912 "adrfam": "IPv4", 00:12:29.912 "traddr": "10.0.0.2", 00:12:29.912 "trsvcid": "4420" 00:12:29.912 } 00:12:29.912 ], 00:12:29.912 "allow_any_host": true, 00:12:29.912 "hosts": [] 00:12:29.912 }, 00:12:29.912 { 00:12:29.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.912 "subtype": "NVMe", 00:12:29.912 "listen_addresses": [ 00:12:29.912 { 00:12:29.912 "trtype": "TCP", 00:12:29.912 "adrfam": "IPv4", 00:12:29.912 "traddr": "10.0.0.2", 00:12:29.912 "trsvcid": "4420" 00:12:29.912 } 00:12:29.912 ], 00:12:29.912 "allow_any_host": true, 00:12:29.912 "hosts": [], 00:12:29.912 "serial_number": "SPDK00000000000001", 00:12:29.912 "model_number": "SPDK bdev Controller", 00:12:29.912 "max_namespaces": 32, 00:12:29.912 "min_cntlid": 1, 00:12:29.912 "max_cntlid": 65519, 00:12:29.912 "namespaces": [ 00:12:29.912 { 00:12:29.912 "nsid": 1, 00:12:29.912 "bdev_name": "Null1", 00:12:29.912 "name": "Null1", 00:12:29.912 "nguid": "8D58507982D341A8B38C21667CCBE5FF", 00:12:29.912 "uuid": "8d585079-82d3-41a8-b38c-21667ccbe5ff" 00:12:29.912 } 00:12:29.912 ] 00:12:29.912 }, 00:12:29.912 { 00:12:29.912 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:29.912 "subtype": "NVMe", 00:12:29.912 "listen_addresses": [ 00:12:29.912 { 00:12:29.912 "trtype": "TCP", 00:12:29.912 "adrfam": "IPv4", 00:12:29.912 "traddr": "10.0.0.2", 00:12:29.912 "trsvcid": "4420" 00:12:29.912 } 00:12:29.912 ], 00:12:29.912 "allow_any_host": true, 00:12:29.912 "hosts": [], 00:12:29.912 "serial_number": "SPDK00000000000002", 00:12:29.912 "model_number": "SPDK bdev Controller", 00:12:29.912 "max_namespaces": 32, 00:12:29.912 "min_cntlid": 1, 00:12:29.912 "max_cntlid": 65519, 00:12:29.912 "namespaces": [ 00:12:29.912 { 00:12:29.912 "nsid": 1, 00:12:29.912 "bdev_name": "Null2", 00:12:29.912 "name": "Null2", 00:12:29.912 "nguid": "7AFAB2A10C04488A833422D3528A963B", 00:12:29.912 "uuid": "7afab2a1-0c04-488a-8334-22d3528a963b" 00:12:29.912 } 00:12:29.912 ] 00:12:29.912 }, 00:12:29.912 { 00:12:29.912 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:29.912 "subtype": "NVMe", 00:12:29.912 "listen_addresses": [ 00:12:29.912 { 00:12:29.912 "trtype": "TCP", 00:12:29.912 "adrfam": "IPv4", 00:12:29.912 "traddr": "10.0.0.2", 
00:12:29.912 "trsvcid": "4420" 00:12:29.912 } 00:12:29.912 ], 00:12:29.912 "allow_any_host": true, 00:12:29.912 "hosts": [], 00:12:29.912 "serial_number": "SPDK00000000000003", 00:12:29.912 "model_number": "SPDK bdev Controller", 00:12:29.912 "max_namespaces": 32, 00:12:29.912 "min_cntlid": 1, 00:12:29.912 "max_cntlid": 65519, 00:12:29.912 "namespaces": [ 00:12:29.912 { 00:12:29.912 "nsid": 1, 00:12:29.912 "bdev_name": "Null3", 00:12:29.912 "name": "Null3", 00:12:29.912 "nguid": "BE9D08930DFA424B87261FABBA19A7D1", 00:12:29.912 "uuid": "be9d0893-0dfa-424b-8726-1fabba19a7d1" 00:12:29.912 } 00:12:29.912 ] 00:12:29.912 }, 00:12:29.912 { 00:12:29.912 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:29.912 "subtype": "NVMe", 00:12:29.912 "listen_addresses": [ 00:12:29.912 { 00:12:29.912 "trtype": "TCP", 00:12:29.912 "adrfam": "IPv4", 00:12:29.912 "traddr": "10.0.0.2", 00:12:29.912 "trsvcid": "4420" 00:12:29.912 } 00:12:29.912 ], 00:12:29.912 "allow_any_host": true, 00:12:29.912 "hosts": [], 00:12:29.912 "serial_number": "SPDK00000000000004", 00:12:29.912 "model_number": "SPDK bdev Controller", 00:12:29.912 "max_namespaces": 32, 00:12:29.912 "min_cntlid": 1, 00:12:29.912 "max_cntlid": 65519, 00:12:29.912 "namespaces": [ 00:12:29.912 { 00:12:29.912 "nsid": 1, 00:12:29.912 "bdev_name": "Null4", 00:12:29.912 "name": "Null4", 00:12:29.912 "nguid": "9B94EBFF6EF7428ABA47C4B1592E2E47", 00:12:29.912 "uuid": "9b94ebff-6ef7-428a-ba47-c4b1592e2e47" 00:12:29.912 } 00:12:29.912 ] 00:12:29.912 } 00:12:29.912 ] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 
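Stepping back, the nvme discover dump and the nvmf_get_subsystems JSON above are two views of the same configuration: six discovery records (the discovery subsystem, cnode1-4, and the 4430 referral) against the target's own subsystem list. Reproducing both checks from a shell is simply:

  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # expect 6 records, generation counter 6
  ./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'   # target-side list of the same NQNs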
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:29.912 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:29.912 00:39:45 
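The deletions logged around this point mirror the setup loop one-for-one, and the bdev_get_bdevs | jq check at the end confirms nothing was left behind. Condensed, under the same rpc.py assumption as the setup sketch:

  RPC=./scripts/rpc.py
  for i in 1 2 3 4; do
      $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $RPC bdev_null_delete Null$i
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  leftover=$($RPC bdev_get_bdevs | jq -r '.[].name')
  [ -z "$leftover" ]                                        # empty output means a clean teardown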
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.913 00:39:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.913 rmmod nvme_tcp 00:12:29.913 rmmod nvme_fabrics 00:12:29.913 rmmod nvme_keyring 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 182599 ']' 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 182599 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 182599 ']' 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 182599 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 182599 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 182599' 00:12:29.913 killing process with pid 182599 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 182599 00:12:29.913 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 182599 00:12:30.171 00:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.171 00:39:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.709 00:12:32.709 real 0m5.768s 00:12:32.709 user 0m4.763s 00:12:32.709 sys 0m2.084s 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.709 ************************************ 00:12:32.709 END TEST nvmf_target_discovery 00:12:32.709 ************************************ 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.709 ************************************ 00:12:32.709 START TEST nvmf_referrals 00:12:32.709 ************************************ 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:32.709 * Looking for test storage... 
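The trace above closes out the discovery test: it walks cnode1 through cnode4, deleting each subsystem and its backing null bdev, drops the 10.0.0.2:4430 referral, and confirms that bdev_get_bdevs comes back empty before nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and kills the target (pid 182599). A condensed sketch of that teardown, assuming scripts/rpc.py from the SPDK tree can reach the running target's RPC socket (the harness wraps it in rpc_cmd):

# Sketch only: replay of the teardown steps traced above.
for i in $(seq 1 4); do
  scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # remove subsystem cnodeN
  scripts/rpc.py bdev_null_delete "Null${i}"                              # remove its backing null bdev
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # drop the referral the test added
leftover=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')              # expected to be empty at this point
[ -z "$leftover" ] || echo "unexpected bdevs still present: $leftover"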
00:12:32.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.709 --rc genhtml_branch_coverage=1 00:12:32.709 --rc genhtml_function_coverage=1 00:12:32.709 --rc genhtml_legend=1 00:12:32.709 --rc geninfo_all_blocks=1 00:12:32.709 --rc geninfo_unexecuted_blocks=1 00:12:32.709 00:12:32.709 ' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.709 --rc genhtml_branch_coverage=1 00:12:32.709 --rc genhtml_function_coverage=1 00:12:32.709 --rc genhtml_legend=1 00:12:32.709 --rc geninfo_all_blocks=1 00:12:32.709 --rc geninfo_unexecuted_blocks=1 00:12:32.709 00:12:32.709 ' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.709 --rc genhtml_branch_coverage=1 00:12:32.709 --rc genhtml_function_coverage=1 00:12:32.709 --rc genhtml_legend=1 00:12:32.709 --rc geninfo_all_blocks=1 00:12:32.709 --rc geninfo_unexecuted_blocks=1 00:12:32.709 00:12:32.709 ' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:32.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.709 --rc genhtml_branch_coverage=1 00:12:32.709 --rc genhtml_function_coverage=1 00:12:32.709 --rc genhtml_legend=1 00:12:32.709 --rc geninfo_all_blocks=1 00:12:32.709 --rc geninfo_unexecuted_blocks=1 00:12:32.709 00:12:32.709 ' 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.709 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.710 00:39:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:34.625 00:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.625 
00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.625 00:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.625 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:34.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:12:34.884 00:12:34.884 --- 10.0.0.2 ping statistics --- 00:12:34.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.884 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:12:34.884 00:12:34.884 --- 10.0.0.1 ping statistics --- 00:12:34.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.884 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=184702 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 184702 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 184702 ']' 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
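Before the referrals test can start the target, nvmftestinit carves the two detected e810 ports into a point-to-point TCP link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a single ping in each direction proves the path (0.335 ms and 0.119 ms above). The same wiring, condensed from the commands in the trace:

# Sketch only: the namespace topology the harness builds before launching nvmf_tgt.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target-side port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target namespace -> initiator

With that in place the target itself is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the discovery listener and every nvme discover later in the trace use 10.0.0.2.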
00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.884 00:39:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.884 [2024-12-07 00:39:50.943272] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:34.884 [2024-12-07 00:39:50.943360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.884 [2024-12-07 00:39:51.015528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.142 [2024-12-07 00:39:51.063520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.142 [2024-12-07 00:39:51.063572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.142 [2024-12-07 00:39:51.063597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.142 [2024-12-07 00:39:51.063610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.142 [2024-12-07 00:39:51.063620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.142 [2024-12-07 00:39:51.065324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.142 [2024-12-07 00:39:51.065399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.142 [2024-12-07 00:39:51.065464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.142 [2024-12-07 00:39:51.065467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 [2024-12-07 00:39:51.220079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
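Once the app is up (nvmfpid 184702) the script creates the TCP transport and exposes the discovery subsystem on port 8009; the NVMe/TCP listener notice follows in the next trace lines. A minimal sketch of those two RPCs, assuming the same rpc.py entry point the harness drives through rpc_cmd:

# Sketch only: transport + discovery listener, as invoked above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB I/O unit size
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery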
00:12:35.142 [2024-12-07 00:39:51.245221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.142 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.143 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.399 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:35.656 00:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.656 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.913 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.914 00:39:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.171 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.428 00:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:36.428 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.685 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.942 00:39:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.942 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:36.942 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:36.942 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:36.942 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:36.943 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.943 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:36.943 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.200 rmmod nvme_tcp 00:12:37.200 rmmod nvme_fabrics 00:12:37.200 rmmod nvme_keyring 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 184702 ']' 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 184702 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 184702 ']' 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 184702 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.200 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184702 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184702' 00:12:37.459 killing process with pid 184702 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 184702 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 184702 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.459 00:39:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
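The teardown running here is the generic nvmftestfini path: unload the host-side fabrics modules, kill the nvmf_tgt process, and strip only the firewall rules the harness added. A compressed sketch of that idiom (process handling simplified):

    # mirrors the rmmod lines above; nvme_keyring is removed as a dependency
    modprobe -r nvme-tcp nvme-fabrics
    # wait only succeeds because the target was launched from this same shell
    kill "$nvmfpid" && wait "$nvmfpid"
    # every rule the harness inserted carries an SPDK_NVMF comment, so filtering
    # it out of iptables-save and restoring removes exactly those rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore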
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.998 00:12:39.998 real 0m7.229s 00:12:39.998 user 0m11.413s 00:12:39.998 sys 0m2.412s 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.998 ************************************ 00:12:39.998 END TEST nvmf_referrals 00:12:39.998 ************************************ 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.998 ************************************ 00:12:39.998 START TEST nvmf_connect_disconnect 00:12:39.998 ************************************ 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.998 * Looking for test storage... 00:12:39.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.998 --rc genhtml_branch_coverage=1 00:12:39.998 --rc genhtml_function_coverage=1 00:12:39.998 --rc genhtml_legend=1 00:12:39.998 --rc geninfo_all_blocks=1 00:12:39.998 --rc geninfo_unexecuted_blocks=1 00:12:39.998 00:12:39.998 ' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.998 --rc genhtml_branch_coverage=1 00:12:39.998 --rc genhtml_function_coverage=1 00:12:39.998 --rc genhtml_legend=1 00:12:39.998 --rc geninfo_all_blocks=1 00:12:39.998 --rc geninfo_unexecuted_blocks=1 00:12:39.998 00:12:39.998 ' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.998 --rc genhtml_branch_coverage=1 00:12:39.998 --rc genhtml_function_coverage=1 00:12:39.998 --rc genhtml_legend=1 00:12:39.998 --rc geninfo_all_blocks=1 00:12:39.998 --rc geninfo_unexecuted_blocks=1 00:12:39.998 00:12:39.998 ' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.998 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.998 --rc genhtml_branch_coverage=1 00:12:39.998 --rc genhtml_function_coverage=1 00:12:39.998 --rc genhtml_legend=1 00:12:39.998 --rc geninfo_all_blocks=1 00:12:39.998 --rc geninfo_unexecuted_blocks=1 00:12:39.998 00:12:39.998 ' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.998 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.999 00:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.999 00:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.902 
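The "[: : integer expression expected" complaint above is benign: common.sh runs a numeric test on a variable that happens to be empty on this job. A defensive form of that kind of check (the flag name here is purely illustrative, not the one in common.sh) avoids the noise:

    # hypothetical flag name; the point is defaulting with ${...:-0} before -eq
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi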
00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.902 
00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.902 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
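Interface discovery in this harness is plain sysfs: each candidate PCI function is mapped to its kernel net device by globbing under /sys/bus/pci/devices/<bdf>/net/, which is how the two cvl_0_* names above were found. A standalone sketch:

    pci=0000:0a:00.0                              # first E810 port seen in the trace
    for d in /sys/bus/pci/devices/$pci/net/*; do
        # basename of the sysfs entry is the interface name, e.g. cvl_0_0
        [ -e "$d" ] && echo "Found net device under $pci: ${d##*/}"
    done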
cvl_0_0 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:12:42.162 00:12:42.162 --- 10.0.0.2 ping statistics --- 00:12:42.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.162 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:12:42.162 00:12:42.162 --- 10.0.0.1 ping statistics --- 00:12:42.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.162 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=187007 00:12:42.162 00:39:58 
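Condensed from the trace above: the target NIC port is moved into its own network namespace so the initiator (cvl_0_1) and target (cvl_0_0) sides talk over the physical link, and a single firewall rule opens the NVMe/TCP port before the ping sanity check. As plain commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # comment text is illustrative; the harness embeds the rule itself in the comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF:allow-nvmf-tcp
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1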
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 187007 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 187007 ']' 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.162 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.162 [2024-12-07 00:39:58.251421] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:42.162 [2024-12-07 00:39:58.251529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.421 [2024-12-07 00:39:58.324640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.421 [2024-12-07 00:39:58.368263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.421 [2024-12-07 00:39:58.368320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.421 [2024-12-07 00:39:58.368343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.421 [2024-12-07 00:39:58.368353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.421 [2024-12-07 00:39:58.368362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
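nvmf_tgt itself is started inside the target namespace, and the harness simply waits for its RPC socket before configuring anything. A reduced sketch of that launch-and-wait pattern (the poll loop stands in for waitforlisten, whose internals are not shown in the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # keep polling until the RPC server on the default socket answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done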
00:12:42.421 [2024-12-07 00:39:58.369938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.421 [2024-12-07 00:39:58.370044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.421 [2024-12-07 00:39:58.370069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.421 [2024-12-07 00:39:58.370072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.421 [2024-12-07 00:39:58.511052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.421 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.679 00:39:58 
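Stripped of the rpc_cmd/xtrace scaffolding, the target configuration performed here is a short RPC sequence; the listener call lands in the next stretch of the trace. As plain scripts/rpc.py invocations this looks roughly like:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    bdev=$($rpc bdev_malloc_create 64 512)          # 64 MiB, 512 B blocks -> prints "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420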
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.679 [2024-12-07 00:39:58.577363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:42.679 00:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:45.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.176 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:47.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.521 [2024-12-07 00:43:28.299742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a37840 is same with the state(6) to be set 00:16:12.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:33.506 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:33.507 rmmod nvme_tcp 00:16:33.507 rmmod nvme_fabrics 00:16:33.507 rmmod nvme_keyring 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 187007 ']' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 187007 ']' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
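The hundred "disconnected 1 controller(s)" lines above are the only visible output of the loop because connect_disconnect.sh runs it under set +x. Each iteration is, in outline, a host-side connect followed by a disconnect; a hedged reconstruction (the real script also waits for the namespace block device to appear between the two steps, which is omitted here):

    for ((i = 0; i < 100; i++)); do
        # NVME_CONNECT='nvme connect -i 8' per the trace; 8 I/O queues per controller
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # source of the "NQN:... disconnected 1 controller(s)" lines seen here
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done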
common/autotest_common.sh@958 -- # kill -0 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187007' 00:16:33.507 killing process with pid 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 187007 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.507 00:43:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:35.422 00:16:35.422 real 3m55.869s 00:16:35.422 user 14m58.831s 00:16:35.422 sys 0m34.741s 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 ************************************ 00:16:35.422 END TEST nvmf_connect_disconnect 00:16:35.422 ************************************ 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.422 ************************************ 00:16:35.422 START TEST nvmf_multitarget 00:16:35.422 ************************************ 00:16:35.422 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:35.683 * Looking for test storage... 00:16:35.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.683 --rc genhtml_branch_coverage=1 00:16:35.683 --rc genhtml_function_coverage=1 00:16:35.683 --rc genhtml_legend=1 00:16:35.683 --rc geninfo_all_blocks=1 00:16:35.683 --rc geninfo_unexecuted_blocks=1 00:16:35.683 00:16:35.683 ' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.683 --rc genhtml_branch_coverage=1 00:16:35.683 --rc genhtml_function_coverage=1 00:16:35.683 --rc genhtml_legend=1 00:16:35.683 --rc geninfo_all_blocks=1 00:16:35.683 --rc geninfo_unexecuted_blocks=1 00:16:35.683 00:16:35.683 ' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.683 --rc genhtml_branch_coverage=1 00:16:35.683 --rc genhtml_function_coverage=1 00:16:35.683 --rc genhtml_legend=1 00:16:35.683 --rc geninfo_all_blocks=1 00:16:35.683 --rc geninfo_unexecuted_blocks=1 00:16:35.683 00:16:35.683 ' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.683 --rc genhtml_branch_coverage=1 00:16:35.683 --rc genhtml_function_coverage=1 00:16:35.683 --rc genhtml_legend=1 00:16:35.683 --rc geninfo_all_blocks=1 00:16:35.683 --rc geninfo_unexecuted_blocks=1 00:16:35.683 00:16:35.683 ' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.683 00:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.683 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:35.684 00:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:35.684 00:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:38.226 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:38.226 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:38.226 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:38.226 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.226 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:38.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:16:38.227 00:16:38.227 --- 10.0.0.2 ping statistics --- 00:16:38.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.227 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:16:38.227 00:43:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:16:38.227 00:16:38.227 --- 10.0.0.1 ping statistics --- 00:16:38.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.227 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=218014 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 218014 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 218014 ']' 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.227 [2024-12-07 00:43:54.095909] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
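The nvmf_tcp_init sequence traced above moves one NIC port (cvl_0_0) into a fresh cvl_0_0_ns_spdk network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 in iptables, and ping-checks both directions. A minimal sketch of that pattern, using the interface names and addresses this log reports; the shortened SPDK_NVMF comment tag and the omission of the preliminary ip -4 addr flush calls are simplifications of the sketch, not harness behaviour:

# Sketch of the nvmf_tcp_init steps seen in the trace above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the root namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator interface; the comment tag is
# what the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore strips.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace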
00:16:38.227 [2024-12-07 00:43:54.096018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.227 [2024-12-07 00:43:54.169441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.227 [2024-12-07 00:43:54.212818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.227 [2024-12-07 00:43:54.212889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.227 [2024-12-07 00:43:54.212902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.227 [2024-12-07 00:43:54.212927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.227 [2024-12-07 00:43:54.212937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.227 [2024-12-07 00:43:54.214600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.227 [2024-12-07 00:43:54.214705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.227 [2024-12-07 00:43:54.214801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.227 [2024-12-07 00:43:54.214804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:38.227 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:38.486 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:38.486 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:38.486 "nvmf_tgt_1" 00:16:38.486 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:38.746 "nvmf_tgt_2" 00:16:38.746 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
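Immediately before the multitarget RPCs above, nvmfappstart launches the nvmf_tgt binary inside that namespace and waits for its RPC socket. A rough sketch of the same pattern, using the flags from the trace; the relative binary path and the 30-second poll loop are assumptions of the sketch, and the real waitforlisten helper in autotest_common.sh is more thorough than a bare socket check:

# Sketch only: start the target in the namespace and wait for /var/tmp/spdk.sock.
NS=cvl_0_0_ns_spdk
NVMF_TGT=./build/bin/nvmf_tgt            # path shortened for the sketch
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for pid $nvmfpid to listen on $RPC_SOCK..."
for _ in $(seq 1 30); do
    [ -S "$RPC_SOCK" ] && break                       # socket appeared
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 1
done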
00:16:38.746 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:38.746 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:38.746 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:39.005 true 00:16:39.005 00:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:39.005 true 00:16:39.005 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:39.005 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.265 rmmod nvme_tcp 00:16:39.265 rmmod nvme_fabrics 00:16:39.265 rmmod nvme_keyring 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 218014 ']' 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 218014 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 218014 ']' 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 218014 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218014 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.265 00:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218014' 00:16:39.265 killing process with pid 218014 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 218014 00:16:39.265 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 218014 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.525 00:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:41.436 00:16:41.436 real 0m5.948s 00:16:41.436 user 0m6.650s 00:16:41.436 sys 0m2.054s 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 ************************************ 00:16:41.436 END TEST nvmf_multitarget 00:16:41.436 ************************************ 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 ************************************ 00:16:41.436 START TEST nvmf_rpc 00:16:41.436 ************************************ 00:16:41.436 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:41.696 * Looking for test storage... 
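The nvmf_multitarget test that just finished is, per the trace, a short RPC round-trip: count the targets, create nvmf_tgt_1 and nvmf_tgt_2, confirm the count went from 1 to 3, delete both, and confirm it is back to 1. Condensed into a standalone sketch; RPC_PY stands in for the multitarget_rpc.py helper the test calls, and the expected counts are taken from the '[' 1 '!=' 1 ']' / '[' 3 '!=' 3 ']' checks in the trace:

RPC_PY=./test/nvmf/target/multitarget_rpc.py   # path shortened for the sketch

count=$("$RPC_PY" nvmf_get_targets | jq length)
[ "$count" -eq 1 ] || { echo "unexpected baseline target count: $count"; exit 1; }

"$RPC_PY" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC_PY" nvmf_create_target -n nvmf_tgt_2 -s 32

count=$("$RPC_PY" nvmf_get_targets | jq length)
[ "$count" -eq 3 ] || { echo "expected 3 targets, got $count"; exit 1; }

"$RPC_PY" nvmf_delete_target -n nvmf_tgt_1
"$RPC_PY" nvmf_delete_target -n nvmf_tgt_2

count=$("$RPC_PY" nvmf_get_targets | jq length)
[ "$count" -eq 1 ] || { echo "targets not cleaned up, count is $count"; exit 1; }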
00:16:41.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.696 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.697 --rc genhtml_branch_coverage=1 00:16:41.697 --rc genhtml_function_coverage=1 00:16:41.697 --rc genhtml_legend=1 00:16:41.697 --rc geninfo_all_blocks=1 00:16:41.697 --rc geninfo_unexecuted_blocks=1 00:16:41.697 00:16:41.697 ' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.697 --rc genhtml_branch_coverage=1 00:16:41.697 --rc genhtml_function_coverage=1 00:16:41.697 --rc genhtml_legend=1 00:16:41.697 --rc geninfo_all_blocks=1 00:16:41.697 --rc geninfo_unexecuted_blocks=1 00:16:41.697 00:16:41.697 ' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.697 --rc genhtml_branch_coverage=1 00:16:41.697 --rc genhtml_function_coverage=1 00:16:41.697 --rc genhtml_legend=1 00:16:41.697 --rc geninfo_all_blocks=1 00:16:41.697 --rc geninfo_unexecuted_blocks=1 00:16:41.697 00:16:41.697 ' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.697 --rc genhtml_branch_coverage=1 00:16:41.697 --rc genhtml_function_coverage=1 00:16:41.697 --rc genhtml_legend=1 00:16:41.697 --rc geninfo_all_blocks=1 00:16:41.697 --rc geninfo_unexecuted_blocks=1 00:16:41.697 00:16:41.697 ' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
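The lt 1.15 2 / cmp_versions trace that opens both this test and the previous one is a component-wise version comparison used to decide which lcov options apply. A reduced sketch of the same idea, assuming purely numeric dot-separated versions; the real cmp_versions in scripts/common.sh also splits on '-' and ':' and handles more operators:

# version_lt A B : succeed if version A sorts strictly before version B.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal, so not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"    # matches the lt 1.15 2 check in the trace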
00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:41.697 00:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:41.697 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:41.698 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:41.698 00:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.237 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.237 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:44.238 00:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:44.238 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:44.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:16:44.239 00:16:44.239 --- 10.0.0.2 ping statistics --- 00:16:44.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.239 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:44.239 00:16:44.239 --- 10.0.0.1 ping statistics --- 00:16:44.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.239 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.239 00:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=220223 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 220223 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 220223 ']' 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.239 [2024-12-07 00:44:00.061587] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:16:44.239 [2024-12-07 00:44:00.061679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.239 [2024-12-07 00:44:00.142559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.239 [2024-12-07 00:44:00.194531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.239 [2024-12-07 00:44:00.194585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.239 [2024-12-07 00:44:00.194614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.239 [2024-12-07 00:44:00.194625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.239 [2024-12-07 00:44:00.194635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.239 [2024-12-07 00:44:00.196256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.239 [2024-12-07 00:44:00.196335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.239 [2024-12-07 00:44:00.196397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.239 [2024-12-07 00:44:00.196401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:44.239 "tick_rate": 2700000000, 00:16:44.239 "poll_groups": [ 00:16:44.239 { 00:16:44.239 "name": "nvmf_tgt_poll_group_000", 00:16:44.239 "admin_qpairs": 0, 00:16:44.239 "io_qpairs": 0, 00:16:44.239 "current_admin_qpairs": 0, 00:16:44.239 "current_io_qpairs": 0, 00:16:44.239 "pending_bdev_io": 0, 00:16:44.239 "completed_nvme_io": 0, 00:16:44.239 "transports": [] 00:16:44.239 }, 00:16:44.239 { 00:16:44.239 "name": "nvmf_tgt_poll_group_001", 00:16:44.239 "admin_qpairs": 0, 00:16:44.239 "io_qpairs": 0, 00:16:44.239 "current_admin_qpairs": 0, 00:16:44.239 "current_io_qpairs": 0, 00:16:44.239 "pending_bdev_io": 0, 00:16:44.239 "completed_nvme_io": 0, 00:16:44.239 "transports": [] 00:16:44.239 }, 00:16:44.239 { 00:16:44.239 "name": "nvmf_tgt_poll_group_002", 00:16:44.239 "admin_qpairs": 0, 00:16:44.239 "io_qpairs": 0, 00:16:44.239 
"current_admin_qpairs": 0, 00:16:44.239 "current_io_qpairs": 0, 00:16:44.239 "pending_bdev_io": 0, 00:16:44.239 "completed_nvme_io": 0, 00:16:44.239 "transports": [] 00:16:44.239 }, 00:16:44.239 { 00:16:44.239 "name": "nvmf_tgt_poll_group_003", 00:16:44.239 "admin_qpairs": 0, 00:16:44.239 "io_qpairs": 0, 00:16:44.239 "current_admin_qpairs": 0, 00:16:44.239 "current_io_qpairs": 0, 00:16:44.239 "pending_bdev_io": 0, 00:16:44.239 "completed_nvme_io": 0, 00:16:44.239 "transports": [] 00:16:44.239 } 00:16:44.239 ] 00:16:44.239 }' 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:44.239 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:44.240 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:44.240 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.500 [2024-12-07 00:44:00.435065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:44.500 "tick_rate": 2700000000, 00:16:44.500 "poll_groups": [ 00:16:44.500 { 00:16:44.500 "name": "nvmf_tgt_poll_group_000", 00:16:44.500 "admin_qpairs": 0, 00:16:44.500 "io_qpairs": 0, 00:16:44.500 "current_admin_qpairs": 0, 00:16:44.500 "current_io_qpairs": 0, 00:16:44.500 "pending_bdev_io": 0, 00:16:44.500 "completed_nvme_io": 0, 00:16:44.500 "transports": [ 00:16:44.500 { 00:16:44.500 "trtype": "TCP" 00:16:44.500 } 00:16:44.500 ] 00:16:44.500 }, 00:16:44.500 { 00:16:44.500 "name": "nvmf_tgt_poll_group_001", 00:16:44.500 "admin_qpairs": 0, 00:16:44.500 "io_qpairs": 0, 00:16:44.500 "current_admin_qpairs": 0, 00:16:44.500 "current_io_qpairs": 0, 00:16:44.500 "pending_bdev_io": 0, 00:16:44.500 "completed_nvme_io": 0, 00:16:44.500 "transports": [ 00:16:44.500 { 00:16:44.500 "trtype": "TCP" 00:16:44.500 } 00:16:44.500 ] 00:16:44.500 }, 00:16:44.500 { 00:16:44.500 "name": "nvmf_tgt_poll_group_002", 00:16:44.500 "admin_qpairs": 0, 00:16:44.500 "io_qpairs": 0, 00:16:44.500 "current_admin_qpairs": 0, 00:16:44.500 "current_io_qpairs": 0, 00:16:44.500 "pending_bdev_io": 0, 00:16:44.500 "completed_nvme_io": 0, 00:16:44.500 "transports": [ 00:16:44.500 { 00:16:44.500 "trtype": "TCP" 
00:16:44.500 } 00:16:44.500 ] 00:16:44.500 }, 00:16:44.500 { 00:16:44.500 "name": "nvmf_tgt_poll_group_003", 00:16:44.500 "admin_qpairs": 0, 00:16:44.500 "io_qpairs": 0, 00:16:44.500 "current_admin_qpairs": 0, 00:16:44.500 "current_io_qpairs": 0, 00:16:44.500 "pending_bdev_io": 0, 00:16:44.500 "completed_nvme_io": 0, 00:16:44.500 "transports": [ 00:16:44.500 { 00:16:44.500 "trtype": "TCP" 00:16:44.500 } 00:16:44.500 ] 00:16:44.500 } 00:16:44.500 ] 00:16:44.500 }' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:44.500 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 Malloc1 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.501 [2024-12-07 00:44:00.586810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:44.501 [2024-12-07 00:44:00.609321] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:44.501 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:44.501 could not add new controller: failed to write to nvme-fabrics device 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:44.501 00:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.501 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.760 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.760 00:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.329 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.329 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:45.329 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.329 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:45.329 00:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:47.241 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.498 [2024-12-07 00:44:03.509045] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:47.498 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:47.498 could not add new controller: failed to write to nvme-fabrics device 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.498 
00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.498 00:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.071 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.071 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.071 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.071 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.071 00:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:50.613 
00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 [2024-12-07 00:44:06.319892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.613 00:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.872 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.872 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.872 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.872 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.872 00:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 [2024-12-07 00:44:09.190354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.416 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.989 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.989 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.989 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.989 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.989 00:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.903 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.904 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.904 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.904 00:44:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 [2024-12-07 00:44:12.037139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.904 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.164 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.164 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.734 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:56.734 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:56.734 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.734 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:56.734 00:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:58.650 
00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:58.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.650 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 [2024-12-07 00:44:14.813887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.911 00:44:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:59.495 00:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:59.495 00:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:59.495 00:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:59.495 00:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:59.495 00:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:01.405 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 [2024-12-07 00:44:17.613240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.667 00:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.240 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.240 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:02.240 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.240 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:02.240 00:44:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:04.155 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:04.414 
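Condensed, each iteration of the rpc.sh loop traced above performs the following sequence (a sketch reconstructed only from the commands echoed in this log; rpc_cmd is the test-harness wrapper around scripts/rpc.py, and the NQN, bdev name Malloc1, namespace ID 5 and the 10.0.0.2:4420 listener are the values used by this run):

    nqn=nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # rpc.sh@82
    rpc_cmd nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
    rpc_cmd nvmf_subsystem_add_ns $nqn Malloc1 -n 5                      # rpc.sh@84
    rpc_cmd nvmf_subsystem_allow_any_host $nqn                           # rpc.sh@85
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n $nqn -a 10.0.0.2 -s 4420                               # rpc.sh@86
    # waitforserial: poll 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME'
    # (sleeping 2s between attempts) until one device with that serial appears
    nvme disconnect -n $nqn                                              # rpc.sh@90
    # waitforserial_disconnect: poll lsblk until the serial is gone again
    rpc_cmd nvmf_subsystem_remove_ns $nqn 5                              # rpc.sh@93
    rpc_cmd nvmf_delete_subsystem $nqn                                   # rpc.sh@94

The second loop starting at rpc.sh@99 below repeats the create/listener/add_ns/allow_any_host steps but skips the host connect, removing namespace 1 and deleting the subsystem immediately.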
00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.414 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 [2024-12-07 00:44:20.411766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 [2024-12-07 00:44:20.459838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 
00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 [2024-12-07 00:44:20.508012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.415 [2024-12-07 00:44:20.556183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.415 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 [2024-12-07 00:44:20.604358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:04.675 "tick_rate": 2700000000, 00:17:04.675 "poll_groups": [ 00:17:04.675 { 00:17:04.675 "name": "nvmf_tgt_poll_group_000", 00:17:04.675 "admin_qpairs": 2, 00:17:04.675 "io_qpairs": 84, 00:17:04.675 "current_admin_qpairs": 0, 00:17:04.675 "current_io_qpairs": 0, 00:17:04.675 "pending_bdev_io": 0, 00:17:04.675 "completed_nvme_io": 159, 00:17:04.675 "transports": [ 00:17:04.675 { 00:17:04.675 "trtype": "TCP" 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "nvmf_tgt_poll_group_001", 00:17:04.675 "admin_qpairs": 2, 00:17:04.675 "io_qpairs": 84, 00:17:04.675 "current_admin_qpairs": 0, 00:17:04.675 "current_io_qpairs": 0, 00:17:04.675 "pending_bdev_io": 0, 00:17:04.675 "completed_nvme_io": 175, 00:17:04.675 "transports": [ 00:17:04.675 { 00:17:04.675 "trtype": "TCP" 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "nvmf_tgt_poll_group_002", 00:17:04.675 "admin_qpairs": 1, 00:17:04.675 "io_qpairs": 84, 00:17:04.675 "current_admin_qpairs": 0, 00:17:04.675 "current_io_qpairs": 0, 00:17:04.675 "pending_bdev_io": 0, 00:17:04.675 "completed_nvme_io": 121, 00:17:04.675 "transports": [ 00:17:04.675 { 00:17:04.675 "trtype": "TCP" 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 }, 00:17:04.675 { 00:17:04.675 "name": "nvmf_tgt_poll_group_003", 00:17:04.675 "admin_qpairs": 2, 00:17:04.675 "io_qpairs": 84, 00:17:04.675 "current_admin_qpairs": 0, 00:17:04.675 "current_io_qpairs": 0, 00:17:04.675 "pending_bdev_io": 0, 00:17:04.675 "completed_nvme_io": 231, 00:17:04.675 "transports": [ 00:17:04.675 { 00:17:04.675 "trtype": "TCP" 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 } 00:17:04.675 ] 00:17:04.675 }' 00:17:04.675 00:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.675 rmmod nvme_tcp 00:17:04.675 rmmod nvme_fabrics 00:17:04.675 rmmod nvme_keyring 00:17:04.675 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 220223 ']' 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 220223 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 220223 ']' 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 220223 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.676 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220223 00:17:04.934 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.934 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.934 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220223' 
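The jsum checks above sum one numeric field across all four poll groups reported by nvmf_get_stats and assert the total is non-zero. A rough reconstruction from the jq and awk filters echoed in the trace (in target/rpc.sh the stats JSON is captured into a variable first, so treat this as an equivalent sketch rather than the verbatim helper):

    jsum() {
        local filter=$1
        # sum the selected counter over every poll group in the stats JSON
        rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7   in this run
    jsum '.poll_groups[].io_qpairs'      # 84*4    = 336 in this run

The rdma-only branch at rpc.sh@115 is skipped here because this run uses the tcp transport.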
00:17:04.934 killing process with pid 220223 00:17:04.934 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 220223 00:17:04.934 00:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 220223 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.228 00:44:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.135 00:17:07.135 real 0m25.592s 00:17:07.135 user 1m22.836s 00:17:07.135 sys 0m4.389s 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.135 ************************************ 00:17:07.135 END TEST nvmf_rpc 00:17:07.135 ************************************ 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.135 ************************************ 00:17:07.135 START TEST nvmf_invalid 00:17:07.135 ************************************ 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:07.135 * Looking for test storage... 
00:17:07.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.135 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.397 --rc genhtml_branch_coverage=1 00:17:07.397 --rc genhtml_function_coverage=1 00:17:07.397 --rc genhtml_legend=1 00:17:07.397 --rc geninfo_all_blocks=1 00:17:07.397 --rc geninfo_unexecuted_blocks=1 00:17:07.397 00:17:07.397 ' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.397 --rc genhtml_branch_coverage=1 00:17:07.397 --rc genhtml_function_coverage=1 00:17:07.397 --rc genhtml_legend=1 00:17:07.397 --rc geninfo_all_blocks=1 00:17:07.397 --rc geninfo_unexecuted_blocks=1 00:17:07.397 00:17:07.397 ' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.397 --rc genhtml_branch_coverage=1 00:17:07.397 --rc genhtml_function_coverage=1 00:17:07.397 --rc genhtml_legend=1 00:17:07.397 --rc geninfo_all_blocks=1 00:17:07.397 --rc geninfo_unexecuted_blocks=1 00:17:07.397 00:17:07.397 ' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.397 --rc genhtml_branch_coverage=1 00:17:07.397 --rc genhtml_function_coverage=1 00:17:07.397 --rc genhtml_legend=1 00:17:07.397 --rc geninfo_all_blocks=1 00:17:07.397 --rc geninfo_unexecuted_blocks=1 00:17:07.397 00:17:07.397 ' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:07.397 00:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.397 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.398 00:44:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.940 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:09.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:17:09.941 00:17:09.941 --- 10.0.0.2 ping statistics --- 00:17:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.941 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:17:09.941 00:17:09.941 --- 10.0.0.1 ping statistics --- 00:17:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.941 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=224725 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 224725 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 224725 ']' 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.941 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.942 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.942 00:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.942 [2024-12-07 00:44:25.788308] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
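For reference, the nvmf_tcp_init sequence traced above builds the test bed as follows (a condensed sketch of the exact commands echoed in the log; cvl_0_0 and cvl_0_1 are the two e810 ports discovered earlier, and 4420 is the NVMe/TCP listener port):

    # target port moves into its own network namespace, initiator stays in the root netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check

The nvmf_tgt application is then launched inside cvl_0_0_ns_spdk (pid 224725 in this run) and the test waits for it to answer on /var/tmp/spdk.sock before issuing any RPCs.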
00:17:09.942 [2024-12-07 00:44:25.788387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.942 [2024-12-07 00:44:25.862596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.942 [2024-12-07 00:44:25.908706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.942 [2024-12-07 00:44:25.908770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.942 [2024-12-07 00:44:25.908785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.942 [2024-12-07 00:44:25.908803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.942 [2024-12-07 00:44:25.908827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.942 [2024-12-07 00:44:25.910261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.942 [2024-12-07 00:44:25.910387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.942 [2024-12-07 00:44:25.910454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.942 [2024-12-07 00:44:25.910458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:09.942 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13762 00:17:10.203 [2024-12-07 00:44:26.317356] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:10.203 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:10.203 { 00:17:10.203 "nqn": "nqn.2016-06.io.spdk:cnode13762", 00:17:10.203 "tgt_name": "foobar", 00:17:10.203 "method": "nvmf_create_subsystem", 00:17:10.203 "req_id": 1 00:17:10.203 } 00:17:10.203 Got JSON-RPC error response 00:17:10.203 response: 00:17:10.203 { 00:17:10.203 "code": -32603, 00:17:10.203 "message": "Unable to find target foobar" 00:17:10.203 }' 00:17:10.203 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:10.203 { 00:17:10.203 "nqn": "nqn.2016-06.io.spdk:cnode13762", 00:17:10.203 "tgt_name": "foobar", 00:17:10.203 "method": "nvmf_create_subsystem", 00:17:10.203 "req_id": 1 00:17:10.203 } 00:17:10.203 Got JSON-RPC error response 00:17:10.203 
response: 00:17:10.203 { 00:17:10.203 "code": -32603, 00:17:10.203 "message": "Unable to find target foobar" 00:17:10.203 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:10.203 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:10.203 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18844 00:17:10.771 [2024-12-07 00:44:26.614337] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18844: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:10.771 { 00:17:10.771 "nqn": "nqn.2016-06.io.spdk:cnode18844", 00:17:10.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:10.771 "method": "nvmf_create_subsystem", 00:17:10.771 "req_id": 1 00:17:10.771 } 00:17:10.771 Got JSON-RPC error response 00:17:10.771 response: 00:17:10.771 { 00:17:10.771 "code": -32602, 00:17:10.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:10.771 }' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:10.771 { 00:17:10.771 "nqn": "nqn.2016-06.io.spdk:cnode18844", 00:17:10.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:10.771 "method": "nvmf_create_subsystem", 00:17:10.771 "req_id": 1 00:17:10.771 } 00:17:10.771 Got JSON-RPC error response 00:17:10.771 response: 00:17:10.771 { 00:17:10.771 "code": -32602, 00:17:10.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:10.771 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15099 00:17:10.771 [2024-12-07 00:44:26.887196] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15099: invalid model number 'SPDK_Controller' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:10.771 { 00:17:10.771 "nqn": "nqn.2016-06.io.spdk:cnode15099", 00:17:10.771 "model_number": "SPDK_Controller\u001f", 00:17:10.771 "method": "nvmf_create_subsystem", 00:17:10.771 "req_id": 1 00:17:10.771 } 00:17:10.771 Got JSON-RPC error response 00:17:10.771 response: 00:17:10.771 { 00:17:10.771 "code": -32602, 00:17:10.771 "message": "Invalid MN SPDK_Controller\u001f" 00:17:10.771 }' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:10.771 { 00:17:10.771 "nqn": "nqn.2016-06.io.spdk:cnode15099", 00:17:10.771 "model_number": "SPDK_Controller\u001f", 00:17:10.771 "method": "nvmf_create_subsystem", 00:17:10.771 "req_id": 1 00:17:10.771 } 00:17:10.771 Got JSON-RPC error response 00:17:10.771 response: 00:17:10.771 { 00:17:10.771 "code": -32602, 00:17:10.771 "message": "Invalid MN SPDK_Controller\u001f" 00:17:10.771 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:10.771 00:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.771 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:11.031 
00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
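The long printf/echo run here is gen_random_s from target/invalid.sh assembling a random serial number one character at a time from code points 32 through 127. A stand-alone sketch of the same idea; the handling of a string that begins with '-' (checked at invalid.sh@28) is not visible in this run and is left out:

    gen_random_s() {
        local length=$1 ll string=
        local -a chars=({32..127})                 # same code-point pool as the chars array above
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, print it as hex, turn it into a character, append it
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$code")")
        done
        # invalid.sh@28 then checks for a leading '-'; its fix-up is not shown in this trace
        echo "$string"
    }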
00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.031 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vQ(NWsON-8{2.r/?{X`o' 00:17:11.032 00:44:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vQ(NWsON-8{2.r/?{X`o' nqn.2016-06.io.spdk:cnode16633 00:17:11.293 [2024-12-07 00:44:27.292539] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16633: invalid serial number 'vQ(NWsON-8{2.r/?{X`o' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@54 -- # out='request: 00:17:11.293 { 00:17:11.293 "nqn": "nqn.2016-06.io.spdk:cnode16633", 00:17:11.293 "serial_number": "vQ(NWsON-8{2.r/?{X`\u007fo", 00:17:11.293 "method": "nvmf_create_subsystem", 00:17:11.293 "req_id": 1 00:17:11.293 } 00:17:11.293 Got JSON-RPC error response 00:17:11.293 response: 00:17:11.293 { 00:17:11.293 "code": -32602, 00:17:11.293 "message": "Invalid SN vQ(NWsON-8{2.r/?{X`\u007fo" 00:17:11.293 }' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:11.293 { 00:17:11.293 "nqn": "nqn.2016-06.io.spdk:cnode16633", 00:17:11.293 "serial_number": "vQ(NWsON-8{2.r/?{X`\u007fo", 00:17:11.293 "method": "nvmf_create_subsystem", 00:17:11.293 "req_id": 1 00:17:11.293 } 00:17:11.293 Got JSON-RPC error response 00:17:11.293 response: 00:17:11.293 { 00:17:11.293 "code": -32602, 00:17:11.293 "message": "Invalid SN vQ(NWsON-8{2.r/?{X`\u007fo" 00:17:11.293 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='\' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x38' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 61 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.293 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
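Each negative case in this test follows the pattern already traced for the foobar target, the bad serial number, and the bad model number: call nvmf_create_subsystem via rpc.py with one deliberately invalid field, capture the JSON-RPC error, and match on its message. A sketch of the first two cases from this run (capturing the error with 2>&1 is an assumption; the script's exact redirection is not shown):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # a non-existent target name is rejected with -32603 "Unable to find target foobar"
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13762 2>&1) || true
    [[ $out == *"Unable to find target"* ]]
    # a serial number containing a control character is rejected with "Invalid SN ..."
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18844 2>&1) || true
    [[ $out == *"Invalid SN"* ]]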
00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x33' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == 
\- ]] 00:17:11.294 00:44:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd+\sdWurn8i@ /dev/null' 00:17:14.423 00:44:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:16.410 00:17:16.410 real 0m9.177s 00:17:16.410 user 0m21.745s 00:17:16.410 sys 0m2.616s 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:16.410 ************************************ 00:17:16.410 END TEST nvmf_invalid 00:17:16.410 ************************************ 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.410 ************************************ 00:17:16.410 START TEST nvmf_connect_stress 00:17:16.410 ************************************ 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:16.410 * Looking for test storage... 00:17:16.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:16.410 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.686 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.687 --rc genhtml_branch_coverage=1 00:17:16.687 --rc genhtml_function_coverage=1 00:17:16.687 --rc genhtml_legend=1 00:17:16.687 --rc geninfo_all_blocks=1 00:17:16.687 --rc geninfo_unexecuted_blocks=1 00:17:16.687 00:17:16.687 ' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.687 --rc genhtml_branch_coverage=1 00:17:16.687 --rc genhtml_function_coverage=1 00:17:16.687 --rc genhtml_legend=1 00:17:16.687 --rc geninfo_all_blocks=1 00:17:16.687 --rc geninfo_unexecuted_blocks=1 00:17:16.687 00:17:16.687 ' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.687 --rc genhtml_branch_coverage=1 00:17:16.687 --rc genhtml_function_coverage=1 00:17:16.687 --rc genhtml_legend=1 00:17:16.687 --rc geninfo_all_blocks=1 00:17:16.687 --rc geninfo_unexecuted_blocks=1 00:17:16.687 00:17:16.687 ' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.687 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.687 --rc genhtml_branch_coverage=1 00:17:16.687 --rc genhtml_function_coverage=1 00:17:16.687 --rc genhtml_legend=1 00:17:16.687 --rc geninfo_all_blocks=1 00:17:16.687 --rc geninfo_unexecuted_blocks=1 00:17:16.687 00:17:16.687 ' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:16.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:16.687 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.688 00:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.733 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.733 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.733 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.734 00:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:18.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:18.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.734 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.735 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:17:19.004 00:17:19.004 --- 10.0.0.2 ping statistics --- 00:17:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.004 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:19.004 00:17:19.004 --- 10.0.0.1 ping statistics --- 00:17:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.004 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=227396 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 227396 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 227396 ']' 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.004 00:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.004 [2024-12-07 00:44:35.049049] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:17:19.004 [2024-12-07 00:44:35.049142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.004 [2024-12-07 00:44:35.121935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.298 [2024-12-07 00:44:35.169082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.298 [2024-12-07 00:44:35.169130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.298 [2024-12-07 00:44:35.169158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.298 [2024-12-07 00:44:35.169169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.298 [2024-12-07 00:44:35.169178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.298 [2024-12-07 00:44:35.170539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.298 [2024-12-07 00:44:35.174015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.298 [2024-12-07 00:44:35.174027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 [2024-12-07 00:44:35.315768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 [2024-12-07 00:44:35.332844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 NULL1 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=227536 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.298 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:19.299 00:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.299 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.574 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.574 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:19.574 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.574 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.574 00:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.193 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.193 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:20.193 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.193 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.193 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.480 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.480 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:20.480 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.480 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.480 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.762 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.762 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:20.762 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.762 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.762 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.043 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.043 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:21.043 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.043 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.043 00:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.326 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.326 00:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:21.326 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.326 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.326 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.614 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.614 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:21.614 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.614 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.614 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.908 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.908 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:21.908 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.908 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.908 00:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.216 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.216 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:22.216 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.216 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.216 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.486 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.486 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:22.486 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.486 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.486 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.057 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.057 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:23.057 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.057 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.057 00:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.316 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.316 00:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:23.316 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.316 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.316 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.574 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.574 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:23.574 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.574 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.574 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.834 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.834 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:23.834 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.834 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.834 00:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.095 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.095 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:24.095 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.095 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.095 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.664 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.664 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:24.664 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.664 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.664 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.922 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.922 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:24.922 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.922 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.922 00:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.178 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.178 00:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:25.178 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.178 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.178 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.437 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.437 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:25.437 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.437 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.437 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.697 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.697 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:25.697 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.697 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.697 00:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.281 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.281 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:26.281 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.281 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.281 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.540 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.540 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:26.540 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.540 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.540 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.800 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.800 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:26.800 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.800 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.800 00:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.060 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.060 00:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:27.060 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.060 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.060 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.321 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.321 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:27.321 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.321 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.321 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.893 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.893 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:27.893 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.893 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.893 00:44:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.152 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.152 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:28.152 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.152 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.152 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.414 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.414 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:28.414 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.414 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.414 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.674 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.674 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:28.674 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.674 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.674 00:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.935 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.935 00:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:28.935 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.935 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.935 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.505 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.505 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:29.505 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.505 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.505 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.505 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 227536 00:17:29.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (227536) - No such process 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 227536 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.764 rmmod nvme_tcp 00:17:29.764 rmmod nvme_fabrics 00:17:29.764 rmmod nvme_keyring 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 227396 ']' 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 227396 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 227396 ']' 00:17:29.764 00:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 227396 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227396 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227396' 00:17:29.764 killing process with pid 227396 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 227396 00:17:29.764 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 227396 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.025 00:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.940 00:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.940 00:17:31.940 real 0m15.570s 00:17:31.940 user 0m40.071s 00:17:31.940 sys 0m4.752s 00:17:31.940 00:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.940 00:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.940 ************************************ 00:17:31.940 END TEST nvmf_connect_stress 00:17:31.940 ************************************ 00:17:31.940 00:44:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:31.940 00:44:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.940 00:44:48 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.940 00:44:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.940 ************************************ 00:17:31.940 START TEST nvmf_fused_ordering 00:17:31.940 ************************************ 00:17:31.940 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:32.203 * Looking for test storage... 00:17:32.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.203 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.204 --rc genhtml_branch_coverage=1 00:17:32.204 --rc genhtml_function_coverage=1 00:17:32.204 --rc genhtml_legend=1 00:17:32.204 --rc geninfo_all_blocks=1 00:17:32.204 --rc geninfo_unexecuted_blocks=1 00:17:32.204 00:17:32.204 ' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.204 --rc genhtml_branch_coverage=1 00:17:32.204 --rc genhtml_function_coverage=1 00:17:32.204 --rc genhtml_legend=1 00:17:32.204 --rc geninfo_all_blocks=1 00:17:32.204 --rc geninfo_unexecuted_blocks=1 00:17:32.204 00:17:32.204 ' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.204 --rc genhtml_branch_coverage=1 00:17:32.204 --rc genhtml_function_coverage=1 00:17:32.204 --rc genhtml_legend=1 00:17:32.204 --rc geninfo_all_blocks=1 00:17:32.204 --rc geninfo_unexecuted_blocks=1 00:17:32.204 00:17:32.204 ' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.204 --rc genhtml_branch_coverage=1 00:17:32.204 --rc genhtml_function_coverage=1 00:17:32.204 --rc genhtml_legend=1 00:17:32.204 --rc geninfo_all_blocks=1 00:17:32.204 --rc geninfo_unexecuted_blocks=1 00:17:32.204 00:17:32.204 ' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:32.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.204 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:32.205 00:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.749 00:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:34.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:34.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:34.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:34.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.749 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:34.750 00:17:34.750 --- 10.0.0.2 ping statistics --- 00:17:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.750 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:34.750 00:17:34.750 --- 10.0.0.1 ping statistics --- 00:17:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.750 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=230706 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 230706 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 230706 ']' 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:34.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:34.750 [2024-12-07 00:44:50.649897] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:17:34.750 [2024-12-07 00:44:50.649977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.750 [2024-12-07 00:44:50.723238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.750 [2024-12-07 00:44:50.770427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.750 [2024-12-07 00:44:50.770484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.750 [2024-12-07 00:44:50.770497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.750 [2024-12-07 00:44:50.770508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.750 [2024-12-07 00:44:50.770532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.750 [2024-12-07 00:44:50.771183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.750 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 [2024-12-07 00:44:50.920594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 [2024-12-07 00:44:50.936788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 NULL1 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.009 00:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:35.009 [2024-12-07 00:44:50.980414] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:17:35.009 [2024-12-07 00:44:50.980450] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230733 ] 00:17:35.270 Attached to nqn.2016-06.io.spdk:cnode1 00:17:35.270 Namespace ID: 1 size: 1GB 00:17:35.270 fused_ordering(0) 00:17:35.270 fused_ordering(1) 00:17:35.270 fused_ordering(2) 00:17:35.270 fused_ordering(3) 00:17:35.270 fused_ordering(4) 00:17:35.270 fused_ordering(5) 00:17:35.270 fused_ordering(6) 00:17:35.270 fused_ordering(7) 00:17:35.270 fused_ordering(8) 00:17:35.270 fused_ordering(9) 00:17:35.270 fused_ordering(10) 00:17:35.270 fused_ordering(11) 00:17:35.270 fused_ordering(12) 00:17:35.270 fused_ordering(13) 00:17:35.270 fused_ordering(14) 00:17:35.270 fused_ordering(15) 00:17:35.270 fused_ordering(16) 00:17:35.270 fused_ordering(17) 00:17:35.270 fused_ordering(18) 00:17:35.270 fused_ordering(19) 00:17:35.270 fused_ordering(20) 00:17:35.270 fused_ordering(21) 00:17:35.270 fused_ordering(22) 00:17:35.270 fused_ordering(23) 00:17:35.270 fused_ordering(24) 00:17:35.270 fused_ordering(25) 00:17:35.270 fused_ordering(26) 00:17:35.270 fused_ordering(27) 00:17:35.270 fused_ordering(28) 00:17:35.270 fused_ordering(29) 00:17:35.270 fused_ordering(30) 00:17:35.270 fused_ordering(31) 00:17:35.270 fused_ordering(32) 00:17:35.270 fused_ordering(33) 00:17:35.270 fused_ordering(34) 00:17:35.270 fused_ordering(35) 00:17:35.270 fused_ordering(36) 00:17:35.270 fused_ordering(37) 00:17:35.270 fused_ordering(38) 00:17:35.270 fused_ordering(39) 00:17:35.270 fused_ordering(40) 00:17:35.270 fused_ordering(41) 00:17:35.270 fused_ordering(42) 00:17:35.270 fused_ordering(43) 00:17:35.270 fused_ordering(44) 00:17:35.270 fused_ordering(45) 00:17:35.270 fused_ordering(46) 00:17:35.270 fused_ordering(47) 00:17:35.270 fused_ordering(48) 00:17:35.270 fused_ordering(49) 00:17:35.270 fused_ordering(50) 00:17:35.270 fused_ordering(51) 00:17:35.270 fused_ordering(52) 00:17:35.270 fused_ordering(53) 00:17:35.270 fused_ordering(54) 00:17:35.270 fused_ordering(55) 00:17:35.270 fused_ordering(56) 00:17:35.270 fused_ordering(57) 00:17:35.270 fused_ordering(58) 00:17:35.270 fused_ordering(59) 00:17:35.270 fused_ordering(60) 00:17:35.270 fused_ordering(61) 00:17:35.270 fused_ordering(62) 00:17:35.270 fused_ordering(63) 00:17:35.270 fused_ordering(64) 00:17:35.270 fused_ordering(65) 00:17:35.270 fused_ordering(66) 00:17:35.270 fused_ordering(67) 00:17:35.270 fused_ordering(68) 00:17:35.270 fused_ordering(69) 00:17:35.270 fused_ordering(70) 00:17:35.270 fused_ordering(71) 00:17:35.270 fused_ordering(72) 00:17:35.270 fused_ordering(73) 00:17:35.270 fused_ordering(74) 00:17:35.270 fused_ordering(75) 00:17:35.270 fused_ordering(76) 00:17:35.270 fused_ordering(77) 00:17:35.270 fused_ordering(78) 00:17:35.270 fused_ordering(79) 00:17:35.270 fused_ordering(80) 00:17:35.270 fused_ordering(81) 00:17:35.270 fused_ordering(82) 00:17:35.270 fused_ordering(83) 00:17:35.270 fused_ordering(84) 00:17:35.270 fused_ordering(85) 00:17:35.270 fused_ordering(86) 00:17:35.270 fused_ordering(87) 00:17:35.270 fused_ordering(88) 00:17:35.270 fused_ordering(89) 00:17:35.270 fused_ordering(90) 00:17:35.270 fused_ordering(91) 00:17:35.270 fused_ordering(92) 00:17:35.270 fused_ordering(93) 00:17:35.270 fused_ordering(94) 00:17:35.270 fused_ordering(95) 00:17:35.270 fused_ordering(96) 00:17:35.270 fused_ordering(97) 00:17:35.270 fused_ordering(98) 
00:17:35.270 fused_ordering(99) 00:17:35.270 fused_ordering(100) 00:17:35.270 fused_ordering(101) 00:17:35.270 fused_ordering(102) 00:17:35.270 fused_ordering(103) 00:17:35.270 fused_ordering(104) 00:17:35.270 fused_ordering(105) 00:17:35.270 fused_ordering(106) 00:17:35.270 fused_ordering(107) 00:17:35.270 fused_ordering(108) 00:17:35.270 fused_ordering(109) 00:17:35.270 fused_ordering(110) 00:17:35.270 fused_ordering(111) 00:17:35.270 fused_ordering(112) 00:17:35.270 fused_ordering(113) 00:17:35.270 fused_ordering(114) 00:17:35.270 fused_ordering(115) 00:17:35.270 fused_ordering(116) 00:17:35.270 fused_ordering(117) 00:17:35.270 fused_ordering(118) 00:17:35.270 fused_ordering(119) 00:17:35.270 fused_ordering(120) 00:17:35.270 fused_ordering(121) 00:17:35.270 fused_ordering(122) 00:17:35.270 fused_ordering(123) 00:17:35.270 fused_ordering(124) 00:17:35.270 fused_ordering(125) 00:17:35.270 fused_ordering(126) 00:17:35.270 fused_ordering(127) 00:17:35.270 fused_ordering(128) 00:17:35.270 fused_ordering(129) 00:17:35.270 fused_ordering(130) 00:17:35.270 fused_ordering(131) 00:17:35.270 fused_ordering(132) 00:17:35.270 fused_ordering(133) 00:17:35.270 fused_ordering(134) 00:17:35.270 fused_ordering(135) 00:17:35.270 fused_ordering(136) 00:17:35.270 fused_ordering(137) 00:17:35.270 fused_ordering(138) 00:17:35.270 fused_ordering(139) 00:17:35.270 fused_ordering(140) 00:17:35.270 fused_ordering(141) 00:17:35.270 fused_ordering(142) 00:17:35.270 fused_ordering(143) 00:17:35.270 fused_ordering(144) 00:17:35.270 fused_ordering(145) 00:17:35.270 fused_ordering(146) 00:17:35.270 fused_ordering(147) 00:17:35.270 fused_ordering(148) 00:17:35.270 fused_ordering(149) 00:17:35.270 fused_ordering(150) 00:17:35.270 fused_ordering(151) 00:17:35.270 fused_ordering(152) 00:17:35.270 fused_ordering(153) 00:17:35.270 fused_ordering(154) 00:17:35.270 fused_ordering(155) 00:17:35.270 fused_ordering(156) 00:17:35.270 fused_ordering(157) 00:17:35.270 fused_ordering(158) 00:17:35.270 fused_ordering(159) 00:17:35.270 fused_ordering(160) 00:17:35.270 fused_ordering(161) 00:17:35.270 fused_ordering(162) 00:17:35.270 fused_ordering(163) 00:17:35.270 fused_ordering(164) 00:17:35.270 fused_ordering(165) 00:17:35.270 fused_ordering(166) 00:17:35.270 fused_ordering(167) 00:17:35.270 fused_ordering(168) 00:17:35.270 fused_ordering(169) 00:17:35.270 fused_ordering(170) 00:17:35.270 fused_ordering(171) 00:17:35.270 fused_ordering(172) 00:17:35.270 fused_ordering(173) 00:17:35.270 fused_ordering(174) 00:17:35.270 fused_ordering(175) 00:17:35.270 fused_ordering(176) 00:17:35.270 fused_ordering(177) 00:17:35.270 fused_ordering(178) 00:17:35.270 fused_ordering(179) 00:17:35.270 fused_ordering(180) 00:17:35.270 fused_ordering(181) 00:17:35.270 fused_ordering(182) 00:17:35.270 fused_ordering(183) 00:17:35.270 fused_ordering(184) 00:17:35.270 fused_ordering(185) 00:17:35.270 fused_ordering(186) 00:17:35.270 fused_ordering(187) 00:17:35.270 fused_ordering(188) 00:17:35.270 fused_ordering(189) 00:17:35.270 fused_ordering(190) 00:17:35.270 fused_ordering(191) 00:17:35.270 fused_ordering(192) 00:17:35.270 fused_ordering(193) 00:17:35.270 fused_ordering(194) 00:17:35.270 fused_ordering(195) 00:17:35.270 fused_ordering(196) 00:17:35.270 fused_ordering(197) 00:17:35.271 fused_ordering(198) 00:17:35.271 fused_ordering(199) 00:17:35.271 fused_ordering(200) 00:17:35.271 fused_ordering(201) 00:17:35.271 fused_ordering(202) 00:17:35.271 fused_ordering(203) 00:17:35.271 fused_ordering(204) 00:17:35.271 fused_ordering(205) 00:17:35.530 
fused_ordering(206) 00:17:35.530 fused_ordering(207) 00:17:35.530 fused_ordering(208) 00:17:35.530 fused_ordering(209) 00:17:35.530 fused_ordering(210) 00:17:35.530 fused_ordering(211) 00:17:35.530 fused_ordering(212) 00:17:35.530 fused_ordering(213) 00:17:35.530 fused_ordering(214) 00:17:35.530 fused_ordering(215) 00:17:35.530 fused_ordering(216) 00:17:35.530 fused_ordering(217) 00:17:35.530 fused_ordering(218) 00:17:35.530 fused_ordering(219) 00:17:35.530 fused_ordering(220) 00:17:35.530 fused_ordering(221) 00:17:35.530 fused_ordering(222) 00:17:35.530 fused_ordering(223) 00:17:35.530 fused_ordering(224) 00:17:35.530 fused_ordering(225) 00:17:35.530 fused_ordering(226) 00:17:35.530 fused_ordering(227) 00:17:35.530 fused_ordering(228) 00:17:35.530 fused_ordering(229) 00:17:35.530 fused_ordering(230) 00:17:35.530 fused_ordering(231) 00:17:35.530 fused_ordering(232) 00:17:35.530 fused_ordering(233) 00:17:35.530 fused_ordering(234) 00:17:35.530 fused_ordering(235) 00:17:35.530 fused_ordering(236) 00:17:35.530 fused_ordering(237) 00:17:35.530 fused_ordering(238) 00:17:35.530 fused_ordering(239) 00:17:35.530 fused_ordering(240) 00:17:35.530 fused_ordering(241) 00:17:35.530 fused_ordering(242) 00:17:35.530 fused_ordering(243) 00:17:35.530 fused_ordering(244) 00:17:35.530 fused_ordering(245) 00:17:35.530 fused_ordering(246) 00:17:35.530 fused_ordering(247) 00:17:35.530 fused_ordering(248) 00:17:35.530 fused_ordering(249) 00:17:35.530 fused_ordering(250) 00:17:35.530 fused_ordering(251) 00:17:35.530 fused_ordering(252) 00:17:35.530 fused_ordering(253) 00:17:35.530 fused_ordering(254) 00:17:35.530 fused_ordering(255) 00:17:35.530 fused_ordering(256) 00:17:35.530 fused_ordering(257) 00:17:35.530 fused_ordering(258) 00:17:35.530 fused_ordering(259) 00:17:35.530 fused_ordering(260) 00:17:35.530 fused_ordering(261) 00:17:35.530 fused_ordering(262) 00:17:35.530 fused_ordering(263) 00:17:35.530 fused_ordering(264) 00:17:35.530 fused_ordering(265) 00:17:35.530 fused_ordering(266) 00:17:35.530 fused_ordering(267) 00:17:35.530 fused_ordering(268) 00:17:35.530 fused_ordering(269) 00:17:35.530 fused_ordering(270) 00:17:35.530 fused_ordering(271) 00:17:35.530 fused_ordering(272) 00:17:35.530 fused_ordering(273) 00:17:35.530 fused_ordering(274) 00:17:35.530 fused_ordering(275) 00:17:35.530 fused_ordering(276) 00:17:35.530 fused_ordering(277) 00:17:35.530 fused_ordering(278) 00:17:35.530 fused_ordering(279) 00:17:35.530 fused_ordering(280) 00:17:35.530 fused_ordering(281) 00:17:35.530 fused_ordering(282) 00:17:35.530 fused_ordering(283) 00:17:35.530 fused_ordering(284) 00:17:35.530 fused_ordering(285) 00:17:35.530 fused_ordering(286) 00:17:35.530 fused_ordering(287) 00:17:35.530 fused_ordering(288) 00:17:35.530 fused_ordering(289) 00:17:35.530 fused_ordering(290) 00:17:35.530 fused_ordering(291) 00:17:35.530 fused_ordering(292) 00:17:35.530 fused_ordering(293) 00:17:35.530 fused_ordering(294) 00:17:35.530 fused_ordering(295) 00:17:35.530 fused_ordering(296) 00:17:35.530 fused_ordering(297) 00:17:35.530 fused_ordering(298) 00:17:35.530 fused_ordering(299) 00:17:35.530 fused_ordering(300) 00:17:35.530 fused_ordering(301) 00:17:35.530 fused_ordering(302) 00:17:35.530 fused_ordering(303) 00:17:35.530 fused_ordering(304) 00:17:35.530 fused_ordering(305) 00:17:35.530 fused_ordering(306) 00:17:35.530 fused_ordering(307) 00:17:35.530 fused_ordering(308) 00:17:35.530 fused_ordering(309) 00:17:35.530 fused_ordering(310) 00:17:35.530 fused_ordering(311) 00:17:35.530 fused_ordering(312) 00:17:35.530 fused_ordering(313) 
00:17:35.530 fused_ordering(314) 00:17:35.530 fused_ordering(315) 00:17:35.530 fused_ordering(316) 00:17:35.530 fused_ordering(317) 00:17:35.530 fused_ordering(318) 00:17:35.530 fused_ordering(319) 00:17:35.530 fused_ordering(320) 00:17:35.530 fused_ordering(321) 00:17:35.530 fused_ordering(322) 00:17:35.530 fused_ordering(323) 00:17:35.530 fused_ordering(324) 00:17:35.530 fused_ordering(325) 00:17:35.530 fused_ordering(326) 00:17:35.530 fused_ordering(327) 00:17:35.530 fused_ordering(328) 00:17:35.530 fused_ordering(329) 00:17:35.530 fused_ordering(330) 00:17:35.530 fused_ordering(331) 00:17:35.530 fused_ordering(332) 00:17:35.530 fused_ordering(333) 00:17:35.530 fused_ordering(334) 00:17:35.530 fused_ordering(335) 00:17:35.530 fused_ordering(336) 00:17:35.530 fused_ordering(337) 00:17:35.530 fused_ordering(338) 00:17:35.530 fused_ordering(339) 00:17:35.530 fused_ordering(340) 00:17:35.530 fused_ordering(341) 00:17:35.530 fused_ordering(342) 00:17:35.530 fused_ordering(343) 00:17:35.530 fused_ordering(344) 00:17:35.530 fused_ordering(345) 00:17:35.530 fused_ordering(346) 00:17:35.530 fused_ordering(347) 00:17:35.530 fused_ordering(348) 00:17:35.530 fused_ordering(349) 00:17:35.530 fused_ordering(350) 00:17:35.530 fused_ordering(351) 00:17:35.530 fused_ordering(352) 00:17:35.530 fused_ordering(353) 00:17:35.530 fused_ordering(354) 00:17:35.530 fused_ordering(355) 00:17:35.530 fused_ordering(356) 00:17:35.530 fused_ordering(357) 00:17:35.530 fused_ordering(358) 00:17:35.530 fused_ordering(359) 00:17:35.530 fused_ordering(360) 00:17:35.530 fused_ordering(361) 00:17:35.530 fused_ordering(362) 00:17:35.530 fused_ordering(363) 00:17:35.530 fused_ordering(364) 00:17:35.530 fused_ordering(365) 00:17:35.530 fused_ordering(366) 00:17:35.530 fused_ordering(367) 00:17:35.530 fused_ordering(368) 00:17:35.530 fused_ordering(369) 00:17:35.530 fused_ordering(370) 00:17:35.530 fused_ordering(371) 00:17:35.530 fused_ordering(372) 00:17:35.530 fused_ordering(373) 00:17:35.530 fused_ordering(374) 00:17:35.530 fused_ordering(375) 00:17:35.530 fused_ordering(376) 00:17:35.530 fused_ordering(377) 00:17:35.530 fused_ordering(378) 00:17:35.530 fused_ordering(379) 00:17:35.530 fused_ordering(380) 00:17:35.530 fused_ordering(381) 00:17:35.530 fused_ordering(382) 00:17:35.530 fused_ordering(383) 00:17:35.530 fused_ordering(384) 00:17:35.531 fused_ordering(385) 00:17:35.531 fused_ordering(386) 00:17:35.531 fused_ordering(387) 00:17:35.531 fused_ordering(388) 00:17:35.531 fused_ordering(389) 00:17:35.531 fused_ordering(390) 00:17:35.531 fused_ordering(391) 00:17:35.531 fused_ordering(392) 00:17:35.531 fused_ordering(393) 00:17:35.531 fused_ordering(394) 00:17:35.531 fused_ordering(395) 00:17:35.531 fused_ordering(396) 00:17:35.531 fused_ordering(397) 00:17:35.531 fused_ordering(398) 00:17:35.531 fused_ordering(399) 00:17:35.531 fused_ordering(400) 00:17:35.531 fused_ordering(401) 00:17:35.531 fused_ordering(402) 00:17:35.531 fused_ordering(403) 00:17:35.531 fused_ordering(404) 00:17:35.531 fused_ordering(405) 00:17:35.531 fused_ordering(406) 00:17:35.531 fused_ordering(407) 00:17:35.531 fused_ordering(408) 00:17:35.531 fused_ordering(409) 00:17:35.531 fused_ordering(410) 00:17:36.100 fused_ordering(411) 00:17:36.100 fused_ordering(412) 00:17:36.100 fused_ordering(413) 00:17:36.100 fused_ordering(414) 00:17:36.100 fused_ordering(415) 00:17:36.100 fused_ordering(416) 00:17:36.100 fused_ordering(417) 00:17:36.100 fused_ordering(418) 00:17:36.100 fused_ordering(419) 00:17:36.100 fused_ordering(420) 00:17:36.100 
fused_ordering(421) 00:17:36.100 fused_ordering(422) 00:17:36.100 fused_ordering(423) 00:17:36.100 fused_ordering(424) 00:17:36.100 fused_ordering(425) 00:17:36.100 fused_ordering(426) 00:17:36.100 fused_ordering(427) 00:17:36.100 fused_ordering(428) 00:17:36.101 fused_ordering(429) 00:17:36.101 fused_ordering(430) 00:17:36.101 fused_ordering(431) 00:17:36.101 fused_ordering(432) 00:17:36.101 fused_ordering(433) 00:17:36.101 fused_ordering(434) 00:17:36.101 fused_ordering(435) 00:17:36.101 fused_ordering(436) 00:17:36.101 fused_ordering(437) 00:17:36.101 fused_ordering(438) 00:17:36.101 fused_ordering(439) 00:17:36.101 fused_ordering(440) 00:17:36.101 fused_ordering(441) 00:17:36.101 fused_ordering(442) 00:17:36.101 fused_ordering(443) 00:17:36.101 fused_ordering(444) 00:17:36.101 fused_ordering(445) 00:17:36.101 fused_ordering(446) 00:17:36.101 fused_ordering(447) 00:17:36.101 fused_ordering(448) 00:17:36.101 fused_ordering(449) 00:17:36.101 fused_ordering(450) 00:17:36.101 fused_ordering(451) 00:17:36.101 fused_ordering(452) 00:17:36.101 fused_ordering(453) 00:17:36.101 fused_ordering(454) 00:17:36.101 fused_ordering(455) 00:17:36.101 fused_ordering(456) 00:17:36.101 fused_ordering(457) 00:17:36.101 fused_ordering(458) 00:17:36.101 fused_ordering(459) 00:17:36.101 fused_ordering(460) 00:17:36.101 fused_ordering(461) 00:17:36.101 fused_ordering(462) 00:17:36.101 fused_ordering(463) 00:17:36.101 fused_ordering(464) 00:17:36.101 fused_ordering(465) 00:17:36.101 fused_ordering(466) 00:17:36.101 fused_ordering(467) 00:17:36.101 fused_ordering(468) 00:17:36.101 fused_ordering(469) 00:17:36.101 fused_ordering(470) 00:17:36.101 fused_ordering(471) 00:17:36.101 fused_ordering(472) 00:17:36.101 fused_ordering(473) 00:17:36.101 fused_ordering(474) 00:17:36.101 fused_ordering(475) 00:17:36.101 fused_ordering(476) 00:17:36.101 fused_ordering(477) 00:17:36.101 fused_ordering(478) 00:17:36.101 fused_ordering(479) 00:17:36.101 fused_ordering(480) 00:17:36.101 fused_ordering(481) 00:17:36.101 fused_ordering(482) 00:17:36.101 fused_ordering(483) 00:17:36.101 fused_ordering(484) 00:17:36.101 fused_ordering(485) 00:17:36.101 fused_ordering(486) 00:17:36.101 fused_ordering(487) 00:17:36.101 fused_ordering(488) 00:17:36.101 fused_ordering(489) 00:17:36.101 fused_ordering(490) 00:17:36.101 fused_ordering(491) 00:17:36.101 fused_ordering(492) 00:17:36.101 fused_ordering(493) 00:17:36.101 fused_ordering(494) 00:17:36.101 fused_ordering(495) 00:17:36.101 fused_ordering(496) 00:17:36.101 fused_ordering(497) 00:17:36.101 fused_ordering(498) 00:17:36.101 fused_ordering(499) 00:17:36.101 fused_ordering(500) 00:17:36.101 fused_ordering(501) 00:17:36.101 fused_ordering(502) 00:17:36.101 fused_ordering(503) 00:17:36.101 fused_ordering(504) 00:17:36.101 fused_ordering(505) 00:17:36.101 fused_ordering(506) 00:17:36.101 fused_ordering(507) 00:17:36.101 fused_ordering(508) 00:17:36.101 fused_ordering(509) 00:17:36.101 fused_ordering(510) 00:17:36.101 fused_ordering(511) 00:17:36.101 fused_ordering(512) 00:17:36.101 fused_ordering(513) 00:17:36.101 fused_ordering(514) 00:17:36.101 fused_ordering(515) 00:17:36.101 fused_ordering(516) 00:17:36.101 fused_ordering(517) 00:17:36.101 fused_ordering(518) 00:17:36.101 fused_ordering(519) 00:17:36.101 fused_ordering(520) 00:17:36.101 fused_ordering(521) 00:17:36.101 fused_ordering(522) 00:17:36.101 fused_ordering(523) 00:17:36.101 fused_ordering(524) 00:17:36.101 fused_ordering(525) 00:17:36.101 fused_ordering(526) 00:17:36.101 fused_ordering(527) 00:17:36.101 fused_ordering(528) 
00:17:36.101 fused_ordering(529) 00:17:36.101 fused_ordering(530) 00:17:36.101 fused_ordering(531) 00:17:36.101 fused_ordering(532) 00:17:36.101 fused_ordering(533) 00:17:36.101 fused_ordering(534) 00:17:36.101 fused_ordering(535) 00:17:36.101 fused_ordering(536) 00:17:36.101 fused_ordering(537) 00:17:36.101 fused_ordering(538) 00:17:36.101 fused_ordering(539) 00:17:36.101 fused_ordering(540) 00:17:36.101 fused_ordering(541) 00:17:36.101 fused_ordering(542) 00:17:36.101 fused_ordering(543) 00:17:36.101 fused_ordering(544) 00:17:36.101 fused_ordering(545) 00:17:36.101 fused_ordering(546) 00:17:36.101 fused_ordering(547) 00:17:36.101 fused_ordering(548) 00:17:36.101 fused_ordering(549) 00:17:36.101 fused_ordering(550) 00:17:36.101 fused_ordering(551) 00:17:36.101 fused_ordering(552) 00:17:36.101 fused_ordering(553) 00:17:36.101 fused_ordering(554) 00:17:36.101 fused_ordering(555) 00:17:36.101 fused_ordering(556) 00:17:36.101 fused_ordering(557) 00:17:36.101 fused_ordering(558) 00:17:36.101 fused_ordering(559) 00:17:36.101 fused_ordering(560) 00:17:36.101 fused_ordering(561) 00:17:36.101 fused_ordering(562) 00:17:36.101 fused_ordering(563) 00:17:36.101 fused_ordering(564) 00:17:36.101 fused_ordering(565) 00:17:36.101 fused_ordering(566) 00:17:36.101 fused_ordering(567) 00:17:36.101 fused_ordering(568) 00:17:36.101 fused_ordering(569) 00:17:36.101 fused_ordering(570) 00:17:36.101 fused_ordering(571) 00:17:36.101 fused_ordering(572) 00:17:36.101 fused_ordering(573) 00:17:36.101 fused_ordering(574) 00:17:36.101 fused_ordering(575) 00:17:36.101 fused_ordering(576) 00:17:36.101 fused_ordering(577) 00:17:36.101 fused_ordering(578) 00:17:36.101 fused_ordering(579) 00:17:36.101 fused_ordering(580) 00:17:36.101 fused_ordering(581) 00:17:36.101 fused_ordering(582) 00:17:36.101 fused_ordering(583) 00:17:36.101 fused_ordering(584) 00:17:36.101 fused_ordering(585) 00:17:36.101 fused_ordering(586) 00:17:36.101 fused_ordering(587) 00:17:36.101 fused_ordering(588) 00:17:36.101 fused_ordering(589) 00:17:36.101 fused_ordering(590) 00:17:36.101 fused_ordering(591) 00:17:36.101 fused_ordering(592) 00:17:36.101 fused_ordering(593) 00:17:36.101 fused_ordering(594) 00:17:36.101 fused_ordering(595) 00:17:36.101 fused_ordering(596) 00:17:36.101 fused_ordering(597) 00:17:36.101 fused_ordering(598) 00:17:36.101 fused_ordering(599) 00:17:36.101 fused_ordering(600) 00:17:36.101 fused_ordering(601) 00:17:36.101 fused_ordering(602) 00:17:36.101 fused_ordering(603) 00:17:36.101 fused_ordering(604) 00:17:36.101 fused_ordering(605) 00:17:36.101 fused_ordering(606) 00:17:36.101 fused_ordering(607) 00:17:36.101 fused_ordering(608) 00:17:36.101 fused_ordering(609) 00:17:36.101 fused_ordering(610) 00:17:36.101 fused_ordering(611) 00:17:36.101 fused_ordering(612) 00:17:36.101 fused_ordering(613) 00:17:36.101 fused_ordering(614) 00:17:36.101 fused_ordering(615) 00:17:36.362 fused_ordering(616) 00:17:36.362 fused_ordering(617) 00:17:36.362 fused_ordering(618) 00:17:36.362 fused_ordering(619) 00:17:36.362 fused_ordering(620) 00:17:36.362 fused_ordering(621) 00:17:36.362 fused_ordering(622) 00:17:36.362 fused_ordering(623) 00:17:36.362 fused_ordering(624) 00:17:36.362 fused_ordering(625) 00:17:36.362 fused_ordering(626) 00:17:36.362 fused_ordering(627) 00:17:36.362 fused_ordering(628) 00:17:36.362 fused_ordering(629) 00:17:36.362 fused_ordering(630) 00:17:36.362 fused_ordering(631) 00:17:36.362 fused_ordering(632) 00:17:36.362 fused_ordering(633) 00:17:36.362 fused_ordering(634) 00:17:36.362 fused_ordering(635) 00:17:36.362 
fused_ordering(636) 00:17:36.362 [... fused_ordering(637) through fused_ordering(958) omitted: identical per-iteration output, timestamps 00:17:36.362-00:17:36.932 ...]
00:17:36.932 fused_ordering(959) 00:17:36.932 fused_ordering(960) 00:17:36.932 fused_ordering(961) 00:17:36.932 fused_ordering(962) 00:17:36.932 fused_ordering(963) 00:17:36.932 fused_ordering(964) 00:17:36.932 fused_ordering(965) 00:17:36.932 fused_ordering(966) 00:17:36.932 fused_ordering(967) 00:17:36.932 fused_ordering(968) 00:17:36.932 fused_ordering(969) 00:17:36.932 fused_ordering(970) 00:17:36.932 fused_ordering(971) 00:17:36.932 fused_ordering(972) 00:17:36.932 fused_ordering(973) 00:17:36.932 fused_ordering(974) 00:17:36.932 fused_ordering(975) 00:17:36.932 fused_ordering(976) 00:17:36.932 fused_ordering(977) 00:17:36.932 fused_ordering(978) 00:17:36.932 fused_ordering(979) 00:17:36.932 fused_ordering(980) 00:17:36.932 fused_ordering(981) 00:17:36.932 fused_ordering(982) 00:17:36.932 fused_ordering(983) 00:17:36.932 fused_ordering(984) 00:17:36.932 fused_ordering(985) 00:17:36.932 fused_ordering(986) 00:17:36.932 fused_ordering(987) 00:17:36.932 fused_ordering(988) 00:17:36.932 fused_ordering(989) 00:17:36.932 fused_ordering(990) 00:17:36.932 fused_ordering(991) 00:17:36.932 fused_ordering(992) 00:17:36.932 fused_ordering(993) 00:17:36.932 fused_ordering(994) 00:17:36.932 fused_ordering(995) 00:17:36.932 fused_ordering(996) 00:17:36.932 fused_ordering(997) 00:17:36.932 fused_ordering(998) 00:17:36.932 fused_ordering(999) 00:17:36.932 fused_ordering(1000) 00:17:36.932 fused_ordering(1001) 00:17:36.932 fused_ordering(1002) 00:17:36.932 fused_ordering(1003) 00:17:36.932 fused_ordering(1004) 00:17:36.932 fused_ordering(1005) 00:17:36.932 fused_ordering(1006) 00:17:36.932 fused_ordering(1007) 00:17:36.932 fused_ordering(1008) 00:17:36.932 fused_ordering(1009) 00:17:36.932 fused_ordering(1010) 00:17:36.932 fused_ordering(1011) 00:17:36.932 fused_ordering(1012) 00:17:36.932 fused_ordering(1013) 00:17:36.932 fused_ordering(1014) 00:17:36.932 fused_ordering(1015) 00:17:36.932 fused_ordering(1016) 00:17:36.932 fused_ordering(1017) 00:17:36.932 fused_ordering(1018) 00:17:36.932 fused_ordering(1019) 00:17:36.932 fused_ordering(1020) 00:17:36.932 fused_ordering(1021) 00:17:36.932 fused_ordering(1022) 00:17:36.932 fused_ordering(1023) 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.932 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.932 rmmod nvme_tcp 00:17:36.932 rmmod nvme_fabrics 00:17:37.192 rmmod nvme_keyring 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:37.192 00:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 230706 ']' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 230706 ']' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230706' 00:17:37.192 killing process with pid 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 230706 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.192 00:44:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.755 00:17:39.755 real 0m7.334s 00:17:39.755 user 0m4.877s 00:17:39.755 sys 0m2.861s 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:39.755 ************************************ 00:17:39.755 END TEST nvmf_fused_ordering 00:17:39.755 
************************************ 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.755 ************************************ 00:17:39.755 START TEST nvmf_ns_masking 00:17:39.755 ************************************ 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:39.755 * Looking for test storage... 00:17:39.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.755 --rc genhtml_branch_coverage=1 00:17:39.755 --rc genhtml_function_coverage=1 00:17:39.755 --rc genhtml_legend=1 00:17:39.755 --rc geninfo_all_blocks=1 00:17:39.755 --rc geninfo_unexecuted_blocks=1 00:17:39.755 00:17:39.755 ' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.755 --rc genhtml_branch_coverage=1 00:17:39.755 --rc genhtml_function_coverage=1 00:17:39.755 --rc genhtml_legend=1 00:17:39.755 --rc geninfo_all_blocks=1 00:17:39.755 --rc geninfo_unexecuted_blocks=1 00:17:39.755 00:17:39.755 ' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.755 --rc genhtml_branch_coverage=1 00:17:39.755 --rc genhtml_function_coverage=1 00:17:39.755 --rc genhtml_legend=1 00:17:39.755 --rc geninfo_all_blocks=1 00:17:39.755 --rc geninfo_unexecuted_blocks=1 00:17:39.755 00:17:39.755 ' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.755 --rc genhtml_branch_coverage=1 00:17:39.755 --rc genhtml_function_coverage=1 00:17:39.755 --rc genhtml_legend=1 00:17:39.755 --rc geninfo_all_blocks=1 00:17:39.755 --rc geninfo_unexecuted_blocks=1 00:17:39.755 00:17:39.755 ' 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.755 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5f1f08b2-d0df-4fb8-842b-3e7e4e14aa70 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e522efa7-613d-44ed-b1d8-612d03073f90 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c07ecd3e-4bca-40bb-ac15-8088f6db925b 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.756 00:44:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:42.299 00:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:42.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:42.299 00:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:42.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:42.299 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:42.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
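[Editor's note - a hedged, simplified sketch of what the per-device scan running here amounts to; this is not the exact nvmf/common.sh source, and the operstate check and the example pci_devs values stand in for the script's own up/down test and discovery.]
pci_devs=(0000:0a:00.0 0000:0a:00.1)                    # example: the two E810 (0x159b) ports reported above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs links the bound netdev(s) under the PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path -> cvl_0_0, cvl_0_1
    for net_dev in "${pci_net_devs[@]}"; do
        if [[ "$(cat "/sys/class/net/$net_dev/operstate" 2>/dev/null)" == up ]]; then
            echo "Found net devices under $pci: $net_dev"
            net_devs+=("$net_dev")
        fi
    done
done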
00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:42.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.300 00:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:42.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:17:42.300 00:17:42.300 --- 10.0.0.2 ping statistics --- 00:17:42.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.300 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:17:42.300 00:17:42.300 --- 10.0.0.1 ping statistics --- 00:17:42.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.300 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.300 00:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=232932 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 232932 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 232932 ']' 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.300 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 [2024-12-07 00:44:58.067314] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:17:42.301 [2024-12-07 00:44:58.067407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.301 [2024-12-07 00:44:58.142609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.301 [2024-12-07 00:44:58.186311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.301 [2024-12-07 00:44:58.186380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.301 [2024-12-07 00:44:58.186402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.301 [2024-12-07 00:44:58.186413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.301 [2024-12-07 00:44:58.186422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
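[Editor's note - for orientation, a condensed, hedged sketch of the namespace-masking flow that the trace below exercises. The rpc.py and nvme-cli invocations are copied from the recorded commands; the $rpc shorthand and the inline comments are added here for readability and are not part of ns_masking.sh.]
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc1                                         # two 64 MiB, 512 B-block bdevs
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                # auto-visible namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4
# Masking proper: re-add the namespace without auto-visibility, then grant/revoke it per host NQN.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # ns 1 visible to host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again
# Host-side check used by ns_is_visible in the trace: a masked namespace either drops out of
# list-ns or reports an all-zero NGUID.
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid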
00:17:42.301 [2024-12-07 00:44:58.187034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.301 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.558 [2024-12-07 00:44:58.568380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.558 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:42.558 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:42.558 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:42.816 Malloc1 00:17:42.816 00:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:43.075 Malloc2 00:17:43.075 00:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:43.642 00:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:43.642 00:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.900 [2024-12-07 00:45:00.033527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07ecd3e-4bca-40bb-ac15-8088f6db925b -a 10.0.0.2 -s 4420 -i 4 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.159 00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:44.159 
00:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:46.068 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.327 [ 0]:0x1 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19963fb4e6f84eb1a9b332c7dcb3626b 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19963fb4e6f84eb1a9b332c7dcb3626b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.327 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.586 [ 0]:0x1 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19963fb4e6f84eb1a9b332c7dcb3626b 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19963fb4e6f84eb1a9b332c7dcb3626b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.586 00:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.586 [ 1]:0x2 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:46.586 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.844 00:45:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.103 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07ecd3e-4bca-40bb-ac15-8088f6db925b -a 10.0.0.2 -s 4420 -i 4 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:47.363 00:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:49.932 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.933 [ 0]:0x2 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ccb5b650083c47129637509e94e8bede 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.933 00:45:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.933 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:49.933 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.933 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.933 [ 0]:0x1 00:17:49.933 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.933 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.192 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19963fb4e6f84eb1a9b332c7dcb3626b 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19963fb4e6f84eb1a9b332c7dcb3626b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.193 [ 1]:0x2 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.193 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.451 00:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:50.451 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.452 [ 0]:0x2 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.452 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.712 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:50.712 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07ecd3e-4bca-40bb-ac15-8088f6db925b -a 10.0.0.2 -s 4420 -i 4 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:50.972 00:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:52.878 00:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.878 [ 0]:0x1 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.878 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=19963fb4e6f84eb1a9b332c7dcb3626b 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 19963fb4e6f84eb1a9b332c7dcb3626b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:53.136 [ 1]:0x2 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.136 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.397 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:53.398 [ 0]:0x2 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.398 00:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:53.398 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:53.656 [2024-12-07 00:45:09.763195] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:53.656 request: 00:17:53.656 { 00:17:53.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.656 "nsid": 2, 00:17:53.656 "host": "nqn.2016-06.io.spdk:host1", 00:17:53.656 "method": "nvmf_ns_remove_host", 00:17:53.656 "req_id": 1 00:17:53.656 } 00:17:53.656 Got JSON-RPC error response 00:17:53.656 response: 00:17:53.656 { 00:17:53.656 "code": -32602, 00:17:53.656 "message": "Invalid parameters" 00:17:53.656 } 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:53.656 00:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.656 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:53.915 [ 0]:0x2 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccb5b650083c47129637509e94e8bede 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccb5b650083c47129637509e94e8bede != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:53.915 00:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=235167 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 235167 /var/tmp/host.sock 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 235167 ']' 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:53.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.915 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.174 [2024-12-07 00:45:10.107501] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:17:54.174 [2024-12-07 00:45:10.107600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235167 ] 00:17:54.174 [2024-12-07 00:45:10.176580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.174 [2024-12-07 00:45:10.224946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.432 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.432 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:54.432 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.690 00:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:54.951 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5f1f08b2-d0df-4fb8-842b-3e7e4e14aa70 00:17:54.951 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:55.210 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5F1F08B2D0DF4FB8842B3E7E4E14AA70 -i 00:17:55.469 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e522efa7-613d-44ed-b1d8-612d03073f90 00:17:55.469 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:55.469 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E522EFA7613D44EDB1D8612D03073F90 -i 00:17:55.728 00:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.986 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:56.245 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.245 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.811 nvme0n1 00:17:56.811 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.811 00:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:57.069 nvme1n2 00:17:57.069 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:57.069 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:57.069 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:57.069 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:57.069 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:57.328 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:57.328 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:57.328 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:57.328 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:57.588 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5f1f08b2-d0df-4fb8-842b-3e7e4e14aa70 == \5\f\1\f\0\8\b\2\-\d\0\d\f\-\4\f\b\8\-\8\4\2\b\-\3\e\7\e\4\e\1\4\a\a\7\0 ]] 00:17:57.588 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:57.588 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:57.588 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:57.847 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
e522efa7-613d-44ed-b1d8-612d03073f90 == \e\5\2\2\e\f\a\7\-\6\1\3\d\-\4\4\e\d\-\b\1\d\8\-\6\1\2\d\0\3\0\7\3\f\9\0 ]] 00:17:57.847 00:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.106 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5f1f08b2-d0df-4fb8-842b-3e7e4e14aa70 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F1F08B2D0DF4FB8842B3E7E4E14AA70 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F1F08B2D0DF4FB8842B3E7E4E14AA70 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.365 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F1F08B2D0DF4FB8842B3E7E4E14AA70 00:17:58.624 [2024-12-07 00:45:14.721383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:58.624 [2024-12-07 00:45:14.721425] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:58.624 [2024-12-07 00:45:14.721449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.624 request: 00:17:58.624 { 00:17:58.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.624 "namespace": { 00:17:58.624 "bdev_name": 
"invalid", 00:17:58.624 "nsid": 1, 00:17:58.624 "nguid": "5F1F08B2D0DF4FB8842B3E7E4E14AA70", 00:17:58.624 "no_auto_visible": false, 00:17:58.624 "hide_metadata": false 00:17:58.624 }, 00:17:58.624 "method": "nvmf_subsystem_add_ns", 00:17:58.624 "req_id": 1 00:17:58.624 } 00:17:58.624 Got JSON-RPC error response 00:17:58.624 response: 00:17:58.624 { 00:17:58.624 "code": -32602, 00:17:58.624 "message": "Invalid parameters" 00:17:58.624 } 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5f1f08b2-d0df-4fb8-842b-3e7e4e14aa70 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.624 00:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5F1F08B2D0DF4FB8842B3E7E4E14AA70 -i 00:17:59.194 00:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:01.101 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:01.101 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:01.101 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 235167 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 235167 ']' 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 235167 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235167 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235167' 00:18:01.361 killing process with pid 235167 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 235167 00:18:01.361 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 235167 00:18:01.927 00:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:02.185 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.186 rmmod nvme_tcp 00:18:02.186 rmmod nvme_fabrics 00:18:02.186 rmmod nvme_keyring 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 232932 ']' 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 232932 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 232932 ']' 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 232932 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 232932 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 232932' 00:18:02.186 killing process with pid 232932 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 232932 00:18:02.186 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 232932 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:02.444 
00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.444 00:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.378 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:04.378 00:18:04.378 real 0m25.076s 00:18:04.378 user 0m36.308s 00:18:04.378 sys 0m4.801s 00:18:04.378 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.378 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.378 ************************************ 00:18:04.378 END TEST nvmf_ns_masking 00:18:04.378 ************************************ 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.638 ************************************ 00:18:04.638 START TEST nvmf_nvme_cli 00:18:04.638 ************************************ 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:04.638 * Looking for test storage... 
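For readers skimming the trace, the ns_masking run that just finished reduces to the following command sequence (a condensed, illustrative sketch; "rpc.py" stands in for the full scripts/rpc.py path used in the log, and the NQNs, address and port are the test's own values):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask nsid 1 for host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
    nvme list-ns /dev/nvme0                                  # masked namespaces are absent from the list
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid      # in this test an all-zero NGUID means the namespace is hidden
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1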
00:18:04.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.638 --rc genhtml_branch_coverage=1 00:18:04.638 --rc genhtml_function_coverage=1 00:18:04.638 --rc genhtml_legend=1 00:18:04.638 --rc geninfo_all_blocks=1 00:18:04.638 --rc geninfo_unexecuted_blocks=1 00:18:04.638 00:18:04.638 ' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.638 --rc genhtml_branch_coverage=1 00:18:04.638 --rc genhtml_function_coverage=1 00:18:04.638 --rc genhtml_legend=1 00:18:04.638 --rc geninfo_all_blocks=1 00:18:04.638 --rc geninfo_unexecuted_blocks=1 00:18:04.638 00:18:04.638 ' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.638 --rc genhtml_branch_coverage=1 00:18:04.638 --rc genhtml_function_coverage=1 00:18:04.638 --rc genhtml_legend=1 00:18:04.638 --rc geninfo_all_blocks=1 00:18:04.638 --rc geninfo_unexecuted_blocks=1 00:18:04.638 00:18:04.638 ' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.638 --rc genhtml_branch_coverage=1 00:18:04.638 --rc genhtml_function_coverage=1 00:18:04.638 --rc genhtml_legend=1 00:18:04.638 --rc geninfo_all_blocks=1 00:18:04.638 --rc geninfo_unexecuted_blocks=1 00:18:04.638 00:18:04.638 ' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
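The lcov check traced just above is a plain component-wise version comparison: both version strings are split on ".", "-" and ":" and compared field by field. A minimal bash sketch of the same idea (simplified for illustration, assuming purely numeric components; not the exact scripts/common.sh helper):

    # Return 0 (true) if version $1 is strictly older than version $2.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}      # missing fields count as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2, keep the 1.x coverage options"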
00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.638 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.639 00:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.639 00:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.327 
00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.327 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.327 00:45:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.327 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:18:07.327 00:18:07.328 --- 10.0.0.2 ping statistics --- 00:18:07.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.328 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:18:07.328 00:18:07.328 --- 10.0.0.1 ping statistics --- 00:18:07.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.328 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=238205 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 238205 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 238205 ']' 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 [2024-12-07 00:45:23.090505] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
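
The trace above shows how the TCP test plumbs its data path before the target comes up: the first cvl port (cvl_0_0) is moved into a dedicated network namespace to act as the target side, the second (cvl_0_1) stays in the default namespace as the initiator, both get 10.0.0.x/24 addresses, an iptables ACCEPT rule is opened for port 4420, and a one-packet ping in each direction confirms reachability. A minimal sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names seen in this run (other NICs will report different names):

    # target-side port into its own namespace; initiator port stays in the default ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in through the initiator-side interface; the test
    # additionally tags the rule with '-m comment --comment SPDK_NVMF:...' so cleanup
    # can strip it again via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # one packet each way proves the path before nvmf_tgt is launched in the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
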
00:18:07.328 [2024-12-07 00:45:23.090596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.328 [2024-12-07 00:45:23.162967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.328 [2024-12-07 00:45:23.207205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.328 [2024-12-07 00:45:23.207261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.328 [2024-12-07 00:45:23.207289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.328 [2024-12-07 00:45:23.207305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.328 [2024-12-07 00:45:23.207314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.328 [2024-12-07 00:45:23.208876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.328 [2024-12-07 00:45:23.209004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.328 [2024-12-07 00:45:23.209072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.328 [2024-12-07 00:45:23.209076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 [2024-12-07 00:45:23.358625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 Malloc0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
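
Once nvmf_tgt is running inside the namespace and listening on /var/tmp/spdk.sock, the target is configured entirely over JSON-RPC: the rpc_cmd calls here and just below create the TCP transport, back two Malloc bdevs, expose them as namespaces of nqn.2016-06.io.spdk:cnode1, and add listeners on 10.0.0.2:4420. A condensed sketch of that sequence using scripts/rpc.py directly, with flags copied verbatim from the trace (full rpc.py path abbreviated):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
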
00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 Malloc1 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 [2024-12-07 00:45:23.456259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.328 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:07.590 00:18:07.590 Discovery Log Number of Records 2, Generation counter 2 00:18:07.590 =====Discovery Log Entry 0====== 00:18:07.590 trtype: tcp 00:18:07.590 adrfam: ipv4 00:18:07.590 subtype: current discovery subsystem 00:18:07.590 treq: not required 00:18:07.590 portid: 0 00:18:07.590 trsvcid: 4420 00:18:07.590 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:07.590 traddr: 10.0.0.2 00:18:07.590 eflags: explicit discovery connections, duplicate discovery information 00:18:07.590 sectype: none 00:18:07.590 =====Discovery Log Entry 1====== 00:18:07.590 trtype: tcp 00:18:07.590 adrfam: ipv4 00:18:07.590 subtype: nvme subsystem 00:18:07.590 treq: not required 00:18:07.590 portid: 0 00:18:07.590 trsvcid: 4420 00:18:07.590 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:07.590 traddr: 10.0.0.2 00:18:07.590 eflags: none 00:18:07.590 sectype: none 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:07.590 00:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:08.529 00:45:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:10.442 00:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:10.442 /dev/nvme0n2 ]] 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.442 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:10.703 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.704 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:10.704 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:10.704 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.704 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:10.704 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:10.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.963 00:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.963 rmmod nvme_tcp 00:18:10.963 rmmod nvme_fabrics 00:18:10.963 rmmod nvme_keyring 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 238205 ']' 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 238205 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 238205 ']' 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 238205 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.963 00:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238205 
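
On the host side the test drives the kernel initiator with nvme-cli, as traced above: discover against the listener, connect to cnode1, wait until lsblk reports both namespaces under the serial number, then disconnect by NQN before nvmf_delete_subsystem and nvmftestfini tear everything down (the rmmod lines are nvme-tcp/nvme-fabrics/nvme-keyring being unloaded during cleanup). A sketch of the same flow, with the host NQN/ID values taken from this run's trace:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$hostnqn --hostid=$hostid
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$hostnqn --hostid=$hostid
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
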
00:18:10.964 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.964 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.964 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238205' 00:18:10.964 killing process with pid 238205 00:18:10.964 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 238205 00:18:10.964 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 238205 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.223 00:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.772 00:18:13.772 real 0m8.753s 00:18:13.772 user 0m16.821s 00:18:13.772 sys 0m2.384s 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:13.772 ************************************ 00:18:13.772 END TEST nvmf_nvme_cli 00:18:13.772 ************************************ 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.772 ************************************ 00:18:13.772 START TEST nvmf_vfio_user 00:18:13.772 ************************************ 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:18:13.772 * Looking for test storage... 00:18:13.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.772 --rc genhtml_branch_coverage=1 00:18:13.772 --rc genhtml_function_coverage=1 00:18:13.772 --rc genhtml_legend=1 00:18:13.772 --rc geninfo_all_blocks=1 00:18:13.772 --rc geninfo_unexecuted_blocks=1 00:18:13.772 00:18:13.772 ' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.772 --rc genhtml_branch_coverage=1 00:18:13.772 --rc genhtml_function_coverage=1 00:18:13.772 --rc genhtml_legend=1 00:18:13.772 --rc geninfo_all_blocks=1 00:18:13.772 --rc geninfo_unexecuted_blocks=1 00:18:13.772 00:18:13.772 ' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.772 --rc genhtml_branch_coverage=1 00:18:13.772 --rc genhtml_function_coverage=1 00:18:13.772 --rc genhtml_legend=1 00:18:13.772 --rc geninfo_all_blocks=1 00:18:13.772 --rc geninfo_unexecuted_blocks=1 00:18:13.772 00:18:13.772 ' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.772 --rc genhtml_branch_coverage=1 00:18:13.772 --rc genhtml_function_coverage=1 00:18:13.772 --rc genhtml_legend=1 00:18:13.772 --rc geninfo_all_blocks=1 00:18:13.772 --rc geninfo_unexecuted_blocks=1 00:18:13.772 00:18:13.772 ' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.772 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
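
The '[: : integer expression expected' message that common.sh prints at line 33 here, as in the earlier nvme_cli run, is a bash quirk rather than a test failure: the traced test is '[' '' -eq 1 ']', and -eq requires integers on both sides, so an empty value makes [ complain and return non-zero while the script carries on. A tiny reproduction plus one defensive pattern (a hypothetical rewrite, not the SPDK source):

    flag=""
    [ "$flag" -eq 1 ] && echo enabled       # prints "[: : integer expression expected", test is false
    [ "${flag:-0}" -eq 1 ] && echo enabled  # defaulting the empty value avoids the error message
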
00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=239023 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 239023' 00:18:13.773 Process pid: 239023 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 239023 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 239023 ']' 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:13.773 [2024-12-07 00:45:29.603381] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:13.773 [2024-12-07 00:45:29.603474] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.773 [2024-12-07 00:45:29.673518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.773 [2024-12-07 00:45:29.723769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.773 [2024-12-07 00:45:29.723826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:13.773 [2024-12-07 00:45:29.723854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.773 [2024-12-07 00:45:29.723864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.773 [2024-12-07 00:45:29.723873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.773 [2024-12-07 00:45:29.729018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.773 [2024-12-07 00:45:29.729093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.773 [2024-12-07 00:45:29.733096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.773 [2024-12-07 00:45:29.733102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:13.773 00:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:14.715 00:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:15.285 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:15.285 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:15.285 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:15.285 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:15.285 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:15.544 Malloc1 00:18:15.544 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:15.801 00:45:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:16.059 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:16.317 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:16.317 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:16.317 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:16.575 Malloc2 00:18:16.575 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
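
For the vfio-user test the target is started with a four-core mask and no TCP networking; instead a VFIOUSER transport is created and, for each of the NUM_DEVICES=2 controllers, a socket directory is made, a 64 MB Malloc bdev is added as the namespace of nqn.2019-07.io.spdk:cnodeN, and a VFIOUSER listener is bound to that directory (the add_ns/add_listener calls for cnode2 follow just below). Condensed from the traced rpc.py calls, path abbreviated:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done
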
00:18:16.832 00:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:17.091 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:17.350 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:17.611 [2024-12-07 00:45:33.511526] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:17.611 [2024-12-07 00:45:33.511563] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239561 ] 00:18:17.611 [2024-12-07 00:45:33.561055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:17.611 [2024-12-07 00:45:33.570429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:17.611 [2024-12-07 00:45:33.570457] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f72356bb000 00:18:17.611 [2024-12-07 00:45:33.571424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.572417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.573427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.574430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.575435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.576442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.577453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.578456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:17.611 [2024-12-07 00:45:33.579466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:17.611 [2024-12-07 00:45:33.579485] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f72343b3000 00:18:17.611 [2024-12-07 00:45:33.580602] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:17.611 [2024-12-07 00:45:33.596329] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:17.611 [2024-12-07 00:45:33.596371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:17.611 [2024-12-07 00:45:33.598574] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:17.611 [2024-12-07 00:45:33.598626] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:17.611 [2024-12-07 00:45:33.598715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:17.611 [2024-12-07 00:45:33.598746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:17.611 [2024-12-07 00:45:33.598758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:17.611 [2024-12-07 00:45:33.599575] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:17.611 [2024-12-07 00:45:33.599599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:17.611 [2024-12-07 00:45:33.599613] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:17.611 [2024-12-07 00:45:33.600579] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:17.611 [2024-12-07 00:45:33.600599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:17.611 [2024-12-07 00:45:33.600613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.601579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:17.611 [2024-12-07 00:45:33.601597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.602585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:17.611 [2024-12-07 00:45:33.602602] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:17.611 [2024-12-07 00:45:33.602616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.602628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.602737] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:17.611 [2024-12-07 00:45:33.602745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.602753] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:17.611 [2024-12-07 00:45:33.607017] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:17.611 [2024-12-07 00:45:33.607609] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:17.611 [2024-12-07 00:45:33.608618] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:17.611 [2024-12-07 00:45:33.609607] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.611 [2024-12-07 00:45:33.609715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:17.611 [2024-12-07 00:45:33.610626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:17.611 [2024-12-07 00:45:33.610643] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:17.611 [2024-12-07 00:45:33.610652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:17.611 [2024-12-07 00:45:33.610676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:17.611 [2024-12-07 00:45:33.610693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:17.611 [2024-12-07 00:45:33.610722] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:17.611 [2024-12-07 00:45:33.610732] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:17.611 [2024-12-07 00:45:33.610739] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.611 [2024-12-07 00:45:33.610758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:17.611 [2024-12-07 00:45:33.610810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:17.611 [2024-12-07 00:45:33.610834] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:17.611 [2024-12-07 00:45:33.610843] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:17.611 [2024-12-07 00:45:33.610850] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:17.611 [2024-12-07 00:45:33.610858] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:17.611 [2024-12-07 00:45:33.610865] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:17.611 [2024-12-07 00:45:33.610877] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:17.611 [2024-12-07 00:45:33.610885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:17.611 [2024-12-07 00:45:33.610897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:17.611 [2024-12-07 00:45:33.610912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:17.611 [2024-12-07 00:45:33.610929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.610946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.612 [2024-12-07 00:45:33.610958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.612 [2024-12-07 00:45:33.610970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.612 [2024-12-07 00:45:33.611005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.612 [2024-12-07 00:45:33.611016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611073] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:17.612 
[2024-12-07 00:45:33.611081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:17.612 [2024-12-07 00:45:33.611242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:17.612 [2024-12-07 00:45:33.611248] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611295] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:17.612 [2024-12-07 00:45:33.611332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611360] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:17.612 [2024-12-07 00:45:33.611383] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:17.612 [2024-12-07 00:45:33.611389] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:17.612 [2024-12-07 00:45:33.611488] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:17.612 [2024-12-07 00:45:33.611494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611591] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:17.612 [2024-12-07 00:45:33.611598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:17.612 [2024-12-07 00:45:33.611606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:17.612 [2024-12-07 00:45:33.611635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:17.612 [2024-12-07 00:45:33.611761] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:17.612 [2024-12-07 00:45:33.611770] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:17.612 [2024-12-07 00:45:33.611776] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:17.612 [2024-12-07 00:45:33.611782] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:17.612 [2024-12-07 00:45:33.611787] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:17.612 [2024-12-07 00:45:33.611796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:17.612 [2024-12-07 00:45:33.611808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:17.612 [2024-12-07 00:45:33.611816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:17.612 [2024-12-07 00:45:33.611821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:17.612 [2024-12-07 00:45:33.611848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:17.612 [2024-12-07 00:45:33.611854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:17.612 [2024-12-07 00:45:33.611874] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:17.612 [2024-12-07 00:45:33.611881] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:17.612 [2024-12-07 00:45:33.611887] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:17.612 [2024-12-07 00:45:33.611895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:17.613 [2024-12-07 00:45:33.611906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:17.613 [2024-12-07 00:45:33.611925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:17.613 [2024-12-07 00:45:33.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:17.613 [2024-12-07 00:45:33.611958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:17.613 ===================================================== 00:18:17.613 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:17.613 ===================================================== 00:18:17.613 Controller Capabilities/Features 00:18:17.613 ================================ 00:18:17.613 Vendor ID: 4e58 00:18:17.613 Subsystem Vendor ID: 4e58 00:18:17.613 Serial Number: SPDK1 00:18:17.613 Model Number: SPDK bdev Controller 00:18:17.613 Firmware Version: 25.01 00:18:17.613 Recommended Arb Burst: 6 00:18:17.613 IEEE OUI Identifier: 8d 6b 50 00:18:17.613 Multi-path I/O 00:18:17.613 May have multiple subsystem ports: Yes 00:18:17.613 May have multiple controllers: Yes 00:18:17.613 Associated with SR-IOV VF: No 00:18:17.613 Max Data Transfer Size: 131072 00:18:17.613 Max Number of Namespaces: 32 00:18:17.613 Max Number of I/O Queues: 127 00:18:17.613 NVMe Specification Version (VS): 1.3 00:18:17.613 NVMe Specification Version (Identify): 1.3 00:18:17.613 Maximum Queue Entries: 256 00:18:17.613 Contiguous Queues Required: Yes 00:18:17.613 Arbitration Mechanisms Supported 00:18:17.613 Weighted Round Robin: Not Supported 00:18:17.613 Vendor Specific: Not Supported 00:18:17.613 Reset Timeout: 15000 ms 00:18:17.613 Doorbell Stride: 4 bytes 00:18:17.613 NVM Subsystem Reset: Not Supported 00:18:17.613 Command Sets Supported 00:18:17.613 NVM Command Set: Supported 00:18:17.613 Boot Partition: Not Supported 00:18:17.613 Memory Page Size Minimum: 4096 bytes 00:18:17.613 Memory Page Size Maximum: 4096 bytes 00:18:17.613 Persistent Memory Region: Not Supported 00:18:17.613 Optional Asynchronous Events Supported 00:18:17.613 Namespace Attribute Notices: Supported 00:18:17.613 Firmware Activation Notices: Not Supported 00:18:17.613 ANA Change Notices: Not Supported 00:18:17.613 PLE Aggregate Log Change Notices: Not Supported 00:18:17.613 LBA Status Info Alert Notices: Not Supported 00:18:17.613 EGE Aggregate Log Change Notices: Not Supported 00:18:17.613 Normal NVM Subsystem Shutdown event: Not Supported 00:18:17.613 Zone Descriptor Change Notices: Not Supported 00:18:17.613 Discovery Log Change Notices: Not Supported 00:18:17.613 Controller Attributes 00:18:17.613 128-bit Host Identifier: Supported 00:18:17.613 Non-Operational Permissive Mode: Not Supported 00:18:17.613 NVM Sets: Not Supported 00:18:17.613 Read Recovery Levels: Not Supported 00:18:17.613 Endurance Groups: Not Supported 00:18:17.613 Predictable Latency Mode: Not Supported 00:18:17.613 Traffic Based Keep ALive: Not Supported 00:18:17.613 Namespace Granularity: Not Supported 00:18:17.613 SQ Associations: Not Supported 00:18:17.613 UUID List: Not Supported 00:18:17.613 Multi-Domain Subsystem: Not Supported 00:18:17.613 Fixed Capacity Management: Not Supported 00:18:17.613 Variable Capacity Management: Not Supported 00:18:17.613 Delete Endurance Group: Not Supported 00:18:17.613 Delete NVM Set: Not Supported 00:18:17.613 Extended LBA Formats Supported: Not Supported 00:18:17.613 Flexible Data Placement Supported: Not Supported 00:18:17.613 00:18:17.613 Controller Memory Buffer Support 00:18:17.613 ================================ 00:18:17.613 
Supported: No 00:18:17.613 00:18:17.613 Persistent Memory Region Support 00:18:17.613 ================================ 00:18:17.613 Supported: No 00:18:17.613 00:18:17.613 Admin Command Set Attributes 00:18:17.613 ============================ 00:18:17.613 Security Send/Receive: Not Supported 00:18:17.613 Format NVM: Not Supported 00:18:17.613 Firmware Activate/Download: Not Supported 00:18:17.613 Namespace Management: Not Supported 00:18:17.613 Device Self-Test: Not Supported 00:18:17.613 Directives: Not Supported 00:18:17.613 NVMe-MI: Not Supported 00:18:17.613 Virtualization Management: Not Supported 00:18:17.613 Doorbell Buffer Config: Not Supported 00:18:17.613 Get LBA Status Capability: Not Supported 00:18:17.613 Command & Feature Lockdown Capability: Not Supported 00:18:17.613 Abort Command Limit: 4 00:18:17.613 Async Event Request Limit: 4 00:18:17.613 Number of Firmware Slots: N/A 00:18:17.613 Firmware Slot 1 Read-Only: N/A 00:18:17.613 Firmware Activation Without Reset: N/A 00:18:17.613 Multiple Update Detection Support: N/A 00:18:17.613 Firmware Update Granularity: No Information Provided 00:18:17.613 Per-Namespace SMART Log: No 00:18:17.613 Asymmetric Namespace Access Log Page: Not Supported 00:18:17.613 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:17.613 Command Effects Log Page: Supported 00:18:17.613 Get Log Page Extended Data: Supported 00:18:17.613 Telemetry Log Pages: Not Supported 00:18:17.613 Persistent Event Log Pages: Not Supported 00:18:17.613 Supported Log Pages Log Page: May Support 00:18:17.613 Commands Supported & Effects Log Page: Not Supported 00:18:17.613 Feature Identifiers & Effects Log Page:May Support 00:18:17.613 NVMe-MI Commands & Effects Log Page: May Support 00:18:17.613 Data Area 4 for Telemetry Log: Not Supported 00:18:17.613 Error Log Page Entries Supported: 128 00:18:17.613 Keep Alive: Supported 00:18:17.613 Keep Alive Granularity: 10000 ms 00:18:17.613 00:18:17.613 NVM Command Set Attributes 00:18:17.613 ========================== 00:18:17.613 Submission Queue Entry Size 00:18:17.613 Max: 64 00:18:17.613 Min: 64 00:18:17.613 Completion Queue Entry Size 00:18:17.613 Max: 16 00:18:17.613 Min: 16 00:18:17.613 Number of Namespaces: 32 00:18:17.613 Compare Command: Supported 00:18:17.613 Write Uncorrectable Command: Not Supported 00:18:17.613 Dataset Management Command: Supported 00:18:17.613 Write Zeroes Command: Supported 00:18:17.613 Set Features Save Field: Not Supported 00:18:17.613 Reservations: Not Supported 00:18:17.613 Timestamp: Not Supported 00:18:17.613 Copy: Supported 00:18:17.613 Volatile Write Cache: Present 00:18:17.613 Atomic Write Unit (Normal): 1 00:18:17.613 Atomic Write Unit (PFail): 1 00:18:17.613 Atomic Compare & Write Unit: 1 00:18:17.613 Fused Compare & Write: Supported 00:18:17.613 Scatter-Gather List 00:18:17.613 SGL Command Set: Supported (Dword aligned) 00:18:17.613 SGL Keyed: Not Supported 00:18:17.613 SGL Bit Bucket Descriptor: Not Supported 00:18:17.613 SGL Metadata Pointer: Not Supported 00:18:17.613 Oversized SGL: Not Supported 00:18:17.613 SGL Metadata Address: Not Supported 00:18:17.613 SGL Offset: Not Supported 00:18:17.613 Transport SGL Data Block: Not Supported 00:18:17.613 Replay Protected Memory Block: Not Supported 00:18:17.613 00:18:17.613 Firmware Slot Information 00:18:17.613 ========================= 00:18:17.613 Active slot: 1 00:18:17.613 Slot 1 Firmware Revision: 25.01 00:18:17.613 00:18:17.613 00:18:17.613 Commands Supported and Effects 00:18:17.613 ============================== 00:18:17.613 Admin 
Commands 00:18:17.613 -------------- 00:18:17.613 Get Log Page (02h): Supported 00:18:17.613 Identify (06h): Supported 00:18:17.613 Abort (08h): Supported 00:18:17.613 Set Features (09h): Supported 00:18:17.613 Get Features (0Ah): Supported 00:18:17.613 Asynchronous Event Request (0Ch): Supported 00:18:17.613 Keep Alive (18h): Supported 00:18:17.613 I/O Commands 00:18:17.613 ------------ 00:18:17.613 Flush (00h): Supported LBA-Change 00:18:17.613 Write (01h): Supported LBA-Change 00:18:17.613 Read (02h): Supported 00:18:17.613 Compare (05h): Supported 00:18:17.613 Write Zeroes (08h): Supported LBA-Change 00:18:17.613 Dataset Management (09h): Supported LBA-Change 00:18:17.613 Copy (19h): Supported LBA-Change 00:18:17.613 00:18:17.613 Error Log 00:18:17.613 ========= 00:18:17.613 00:18:17.613 Arbitration 00:18:17.613 =========== 00:18:17.613 Arbitration Burst: 1 00:18:17.613 00:18:17.613 Power Management 00:18:17.613 ================ 00:18:17.613 Number of Power States: 1 00:18:17.613 Current Power State: Power State #0 00:18:17.613 Power State #0: 00:18:17.613 Max Power: 0.00 W 00:18:17.613 Non-Operational State: Operational 00:18:17.613 Entry Latency: Not Reported 00:18:17.613 Exit Latency: Not Reported 00:18:17.613 Relative Read Throughput: 0 00:18:17.613 Relative Read Latency: 0 00:18:17.613 Relative Write Throughput: 0 00:18:17.613 Relative Write Latency: 0 00:18:17.614 Idle Power: Not Reported 00:18:17.614 Active Power: Not Reported 00:18:17.614 Non-Operational Permissive Mode: Not Supported 00:18:17.614 00:18:17.614 Health Information 00:18:17.614 ================== 00:18:17.614 Critical Warnings: 00:18:17.614 Available Spare Space: OK 00:18:17.614 Temperature: OK 00:18:17.614 Device Reliability: OK 00:18:17.614 Read Only: No 00:18:17.614 Volatile Memory Backup: OK 00:18:17.614 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:17.614 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:17.614 Available Spare: 0% 00:18:17.614 Available Sp[2024-12-07 00:45:33.612101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:17.614 [2024-12-07 00:45:33.612119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:17.614 [2024-12-07 00:45:33.612164] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:17.614 [2024-12-07 00:45:33.612183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.614 [2024-12-07 00:45:33.612194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.614 [2024-12-07 00:45:33.612204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.614 [2024-12-07 00:45:33.612214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.614 [2024-12-07 00:45:33.612643] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:17.614 [2024-12-07 00:45:33.612664] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:17.614 [2024-12-07 00:45:33.613642] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.614 [2024-12-07 00:45:33.613727] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:17.614 [2024-12-07 00:45:33.613741] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:17.614 [2024-12-07 00:45:33.614653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:17.614 [2024-12-07 00:45:33.614675] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:17.614 [2024-12-07 00:45:33.614729] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:17.614 [2024-12-07 00:45:33.616694] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:17.614 are Threshold: 0% 00:18:17.614 Life Percentage Used: 0% 00:18:17.614 Data Units Read: 0 00:18:17.614 Data Units Written: 0 00:18:17.614 Host Read Commands: 0 00:18:17.614 Host Write Commands: 0 00:18:17.614 Controller Busy Time: 0 minutes 00:18:17.614 Power Cycles: 0 00:18:17.614 Power On Hours: 0 hours 00:18:17.614 Unsafe Shutdowns: 0 00:18:17.614 Unrecoverable Media Errors: 0 00:18:17.614 Lifetime Error Log Entries: 0 00:18:17.614 Warning Temperature Time: 0 minutes 00:18:17.614 Critical Temperature Time: 0 minutes 00:18:17.614 00:18:17.614 Number of Queues 00:18:17.614 ================ 00:18:17.614 Number of I/O Submission Queues: 127 00:18:17.614 Number of I/O Completion Queues: 127 00:18:17.614 00:18:17.614 Active Namespaces 00:18:17.614 ================= 00:18:17.614 Namespace ID:1 00:18:17.614 Error Recovery Timeout: Unlimited 00:18:17.614 Command Set Identifier: NVM (00h) 00:18:17.614 Deallocate: Supported 00:18:17.614 Deallocated/Unwritten Error: Not Supported 00:18:17.614 Deallocated Read Value: Unknown 00:18:17.614 Deallocate in Write Zeroes: Not Supported 00:18:17.614 Deallocated Guard Field: 0xFFFF 00:18:17.614 Flush: Supported 00:18:17.614 Reservation: Supported 00:18:17.614 Namespace Sharing Capabilities: Multiple Controllers 00:18:17.614 Size (in LBAs): 131072 (0GiB) 00:18:17.614 Capacity (in LBAs): 131072 (0GiB) 00:18:17.614 Utilization (in LBAs): 131072 (0GiB) 00:18:17.614 NGUID: 7FC0233ABBDE41649A2D6F1B36B6F67E 00:18:17.614 UUID: 7fc0233a-bbde-4164-9a2d-6f1b36b6f67e 00:18:17.614 Thin Provisioning: Not Supported 00:18:17.614 Per-NS Atomic Units: Yes 00:18:17.614 Atomic Boundary Size (Normal): 0 00:18:17.614 Atomic Boundary Size (PFail): 0 00:18:17.614 Atomic Boundary Offset: 0 00:18:17.614 Maximum Single Source Range Length: 65535 00:18:17.614 Maximum Copy Length: 65535 00:18:17.614 Maximum Source Range Count: 1 00:18:17.614 NGUID/EUI64 Never Reused: No 00:18:17.614 Namespace Write Protected: No 00:18:17.614 Number of LBA Formats: 1 00:18:17.614 Current LBA Format: LBA Format #00 00:18:17.614 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:17.614 00:18:17.614 00:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
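With the identify pass confirming that the malloc namespace is reachable through the vfio-user controller, the harness switches to I/O. Every client-side tool in this block takes the same transport-ID string: trtype:VFIOUSER selects the vfio-user transport, traddr points at the per-controller socket directory, and subnqn names the subsystem. A sketch with the string factored out and reused, assuming the binaries are run from the SPDK build tree and the target set up earlier is still listening; in the perf invocation, -q 128 is queue depth, -o 4096 is I/O size in bytes, -w read is the workload (a write pass follows), -t 5 is run time in seconds, -c 0x2 is the core mask (lcore 1), while -s 256 and -g are carried over from the harness unchanged:

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
build/bin/spdk_nvme_identify -r "$TRID" -g
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2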
00:18:17.875 [2024-12-07 00:45:33.860853] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.158 Initializing NVMe Controllers 00:18:23.158 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.158 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:23.158 Initialization complete. Launching workers. 00:18:23.158 ======================================================== 00:18:23.158 Latency(us) 00:18:23.158 Device Information : IOPS MiB/s Average min max 00:18:23.158 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30775.38 120.22 4158.27 1215.08 7532.76 00:18:23.158 ======================================================== 00:18:23.158 Total : 30775.38 120.22 4158.27 1215.08 7532.76 00:18:23.158 00:18:23.158 [2024-12-07 00:45:38.879692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.158 00:45:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:23.158 [2024-12-07 00:45:39.131853] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.438 Initializing NVMe Controllers 00:18:28.438 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:28.438 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:28.438 Initialization complete. Launching workers. 
00:18:28.438 ======================================================== 00:18:28.438 Latency(us) 00:18:28.438 Device Information : IOPS MiB/s Average min max 00:18:28.438 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16024.67 62.60 7998.52 7521.48 15972.61 00:18:28.438 ======================================================== 00:18:28.438 Total : 16024.67 62.60 7998.52 7521.48 15972.61 00:18:28.438 00:18:28.438 [2024-12-07 00:45:44.172887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:28.438 00:45:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:28.438 [2024-12-07 00:45:44.394935] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:33.717 [2024-12-07 00:45:49.474425] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:33.717 Initializing NVMe Controllers 00:18:33.717 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:33.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:33.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:33.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:33.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:33.717 Initialization complete. Launching workers. 00:18:33.717 Starting thread on core 2 00:18:33.717 Starting thread on core 3 00:18:33.717 Starting thread on core 1 00:18:33.717 00:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:33.717 [2024-12-07 00:45:49.806499] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.012 [2024-12-07 00:45:52.868574] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.012 Initializing NVMe Controllers 00:18:37.012 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.012 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.012 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:37.012 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:37.012 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:37.012 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:37.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:37.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:37.012 Initialization complete. Launching workers. 
00:18:37.012 Starting thread on core 1 with urgent priority queue 00:18:37.012 Starting thread on core 2 with urgent priority queue 00:18:37.012 Starting thread on core 3 with urgent priority queue 00:18:37.012 Starting thread on core 0 with urgent priority queue 00:18:37.012 SPDK bdev Controller (SPDK1 ) core 0: 7172.00 IO/s 13.94 secs/100000 ios 00:18:37.012 SPDK bdev Controller (SPDK1 ) core 1: 6854.67 IO/s 14.59 secs/100000 ios 00:18:37.012 SPDK bdev Controller (SPDK1 ) core 2: 6160.33 IO/s 16.23 secs/100000 ios 00:18:37.012 SPDK bdev Controller (SPDK1 ) core 3: 7171.00 IO/s 13.95 secs/100000 ios 00:18:37.012 ======================================================== 00:18:37.012 00:18:37.012 00:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:37.271 [2024-12-07 00:45:53.190195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.272 Initializing NVMe Controllers 00:18:37.272 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.272 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.272 Namespace ID: 1 size: 0GB 00:18:37.272 Initialization complete. 00:18:37.272 INFO: using host memory buffer for IO 00:18:37.272 Hello world! 00:18:37.272 [2024-12-07 00:45:53.225850] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.272 00:45:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:37.530 [2024-12-07 00:45:53.535536] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:38.471 Initializing NVMe Controllers 00:18:38.471 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:38.471 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:38.472 Initialization complete. Launching workers. 
00:18:38.472 submit (in ns) avg, min, max = 9771.2, 3508.9, 4016572.2 00:18:38.472 complete (in ns) avg, min, max = 27045.7, 2058.9, 4019402.2 00:18:38.472 00:18:38.472 Submit histogram 00:18:38.472 ================ 00:18:38.472 Range in us Cumulative Count 00:18:38.472 3.508 - 3.532: 0.4918% ( 61) 00:18:38.472 3.532 - 3.556: 1.2173% ( 90) 00:18:38.472 3.556 - 3.579: 3.8052% ( 321) 00:18:38.472 3.579 - 3.603: 8.5376% ( 587) 00:18:38.472 3.603 - 3.627: 16.5108% ( 989) 00:18:38.472 3.627 - 3.650: 25.1774% ( 1075) 00:18:38.472 3.650 - 3.674: 33.7875% ( 1068) 00:18:38.472 3.674 - 3.698: 41.4382% ( 949) 00:18:38.472 3.698 - 3.721: 48.3473% ( 857) 00:18:38.472 3.721 - 3.745: 53.6037% ( 652) 00:18:38.472 3.745 - 3.769: 57.1751% ( 443) 00:18:38.472 3.769 - 3.793: 61.4882% ( 535) 00:18:38.472 3.793 - 3.816: 64.9549% ( 430) 00:18:38.472 3.816 - 3.840: 68.8326% ( 481) 00:18:38.472 3.840 - 3.864: 72.9281% ( 508) 00:18:38.472 3.864 - 3.887: 77.2009% ( 530) 00:18:38.472 3.887 - 3.911: 81.3609% ( 516) 00:18:38.472 3.911 - 3.935: 84.3921% ( 376) 00:18:38.472 3.935 - 3.959: 86.3995% ( 249) 00:18:38.472 3.959 - 3.982: 88.2699% ( 232) 00:18:38.472 3.982 - 4.006: 89.7614% ( 185) 00:18:38.472 4.006 - 4.030: 91.1722% ( 175) 00:18:38.472 4.030 - 4.053: 92.4137% ( 154) 00:18:38.472 4.053 - 4.077: 93.5021% ( 135) 00:18:38.472 4.077 - 4.101: 94.3405% ( 104) 00:18:38.472 4.101 - 4.124: 95.1387% ( 99) 00:18:38.472 4.124 - 4.148: 95.6708% ( 66) 00:18:38.472 4.148 - 4.172: 96.0335% ( 45) 00:18:38.472 4.172 - 4.196: 96.2996% ( 33) 00:18:38.472 4.196 - 4.219: 96.4689% ( 21) 00:18:38.472 4.219 - 4.243: 96.5817% ( 14) 00:18:38.472 4.243 - 4.267: 96.7188% ( 17) 00:18:38.472 4.267 - 4.290: 96.8075% ( 11) 00:18:38.472 4.290 - 4.314: 96.9042% ( 12) 00:18:38.472 4.314 - 4.338: 96.9203% ( 2) 00:18:38.472 4.338 - 4.361: 96.9526% ( 4) 00:18:38.472 4.361 - 4.385: 97.0252% ( 9) 00:18:38.472 4.385 - 4.409: 97.1138% ( 11) 00:18:38.472 4.409 - 4.433: 97.1622% ( 6) 00:18:38.472 4.433 - 4.456: 97.1945% ( 4) 00:18:38.472 4.456 - 4.480: 97.2106% ( 2) 00:18:38.472 4.480 - 4.504: 97.2589% ( 6) 00:18:38.472 4.504 - 4.527: 97.2751% ( 2) 00:18:38.472 4.527 - 4.551: 97.2831% ( 1) 00:18:38.472 4.551 - 4.575: 97.2912% ( 1) 00:18:38.472 4.575 - 4.599: 97.2993% ( 1) 00:18:38.472 4.599 - 4.622: 97.3154% ( 2) 00:18:38.472 4.622 - 4.646: 97.3315% ( 2) 00:18:38.472 4.646 - 4.670: 97.3396% ( 1) 00:18:38.472 4.670 - 4.693: 97.3557% ( 2) 00:18:38.472 4.693 - 4.717: 97.3879% ( 4) 00:18:38.472 4.717 - 4.741: 97.4121% ( 3) 00:18:38.472 4.741 - 4.764: 97.4524% ( 5) 00:18:38.472 4.764 - 4.788: 97.5250% ( 9) 00:18:38.472 4.788 - 4.812: 97.5653% ( 5) 00:18:38.472 4.812 - 4.836: 97.6137% ( 6) 00:18:38.472 4.836 - 4.859: 97.6459% ( 4) 00:18:38.472 4.859 - 4.883: 97.7024% ( 7) 00:18:38.472 4.883 - 4.907: 97.7507% ( 6) 00:18:38.472 4.907 - 4.930: 97.7668% ( 2) 00:18:38.472 4.930 - 4.954: 97.7910% ( 3) 00:18:38.472 4.954 - 4.978: 97.8233% ( 4) 00:18:38.472 4.978 - 5.001: 97.8636% ( 5) 00:18:38.472 5.001 - 5.025: 97.9281% ( 8) 00:18:38.472 5.025 - 5.049: 97.9845% ( 7) 00:18:38.472 5.073 - 5.096: 97.9926% ( 1) 00:18:38.472 5.096 - 5.120: 98.0006% ( 1) 00:18:38.472 5.120 - 5.144: 98.0087% ( 1) 00:18:38.472 5.144 - 5.167: 98.0329% ( 3) 00:18:38.472 5.167 - 5.191: 98.0651% ( 4) 00:18:38.472 5.191 - 5.215: 98.0813% ( 2) 00:18:38.472 5.239 - 5.262: 98.0893% ( 1) 00:18:38.472 5.262 - 5.286: 98.1054% ( 2) 00:18:38.472 5.381 - 5.404: 98.1135% ( 1) 00:18:38.472 5.404 - 5.428: 98.1216% ( 1) 00:18:38.472 5.499 - 5.523: 98.1296% ( 1) 00:18:38.472 5.570 - 5.594: 98.1377% ( 1) 
00:18:38.472 5.594 - 5.618: 98.1538% ( 2) 00:18:38.472 5.807 - 5.831: 98.1619% ( 1) 00:18:38.472 5.831 - 5.855: 98.1699% ( 1) 00:18:38.472 5.879 - 5.902: 98.1780% ( 1) 00:18:38.472 6.116 - 6.163: 98.1941% ( 2) 00:18:38.472 6.305 - 6.353: 98.2022% ( 1) 00:18:38.472 6.495 - 6.542: 98.2103% ( 1) 00:18:38.472 6.732 - 6.779: 98.2183% ( 1) 00:18:38.472 6.779 - 6.827: 98.2264% ( 1) 00:18:38.472 6.827 - 6.874: 98.2506% ( 3) 00:18:38.472 6.921 - 6.969: 98.2586% ( 1) 00:18:38.472 7.016 - 7.064: 98.2667% ( 1) 00:18:38.472 7.064 - 7.111: 98.2828% ( 2) 00:18:38.472 7.111 - 7.159: 98.3070% ( 3) 00:18:38.472 7.159 - 7.206: 98.3231% ( 2) 00:18:38.472 7.253 - 7.301: 98.3312% ( 1) 00:18:38.472 7.301 - 7.348: 98.3473% ( 2) 00:18:38.472 7.396 - 7.443: 98.3554% ( 1) 00:18:38.472 7.443 - 7.490: 98.3715% ( 2) 00:18:38.472 7.490 - 7.538: 98.3796% ( 1) 00:18:38.472 7.633 - 7.680: 98.3876% ( 1) 00:18:38.472 7.775 - 7.822: 98.4118% ( 3) 00:18:38.472 7.822 - 7.870: 98.4360% ( 3) 00:18:38.472 8.059 - 8.107: 98.4521% ( 2) 00:18:38.472 8.107 - 8.154: 98.4602% ( 1) 00:18:38.472 8.154 - 8.201: 98.4682% ( 1) 00:18:38.472 8.439 - 8.486: 98.4844% ( 2) 00:18:38.472 8.581 - 8.628: 98.5005% ( 2) 00:18:38.472 8.676 - 8.723: 98.5085% ( 1) 00:18:38.472 8.723 - 8.770: 98.5166% ( 1) 00:18:38.472 8.818 - 8.865: 98.5247% ( 1) 00:18:38.472 9.055 - 9.102: 98.5327% ( 1) 00:18:38.472 9.150 - 9.197: 98.5569% ( 3) 00:18:38.472 9.197 - 9.244: 98.5650% ( 1) 00:18:38.472 9.387 - 9.434: 98.5811% ( 2) 00:18:38.472 9.576 - 9.624: 98.5892% ( 1) 00:18:38.472 10.287 - 10.335: 98.5972% ( 1) 00:18:38.472 10.667 - 10.714: 98.6053% ( 1) 00:18:38.472 11.093 - 11.141: 98.6134% ( 1) 00:18:38.472 11.330 - 11.378: 98.6214% ( 1) 00:18:38.472 11.615 - 11.662: 98.6295% ( 1) 00:18:38.472 11.804 - 11.852: 98.6456% ( 2) 00:18:38.472 12.041 - 12.089: 98.6537% ( 1) 00:18:38.472 12.089 - 12.136: 98.6617% ( 1) 00:18:38.472 12.705 - 12.800: 98.6698% ( 1) 00:18:38.472 12.800 - 12.895: 98.6778% ( 1) 00:18:38.472 12.895 - 12.990: 98.6940% ( 2) 00:18:38.472 13.084 - 13.179: 98.7101% ( 2) 00:18:38.472 13.179 - 13.274: 98.7182% ( 1) 00:18:38.472 13.369 - 13.464: 98.7262% ( 1) 00:18:38.472 13.653 - 13.748: 98.7343% ( 1) 00:18:38.472 13.938 - 14.033: 98.7423% ( 1) 00:18:38.472 14.317 - 14.412: 98.7504% ( 1) 00:18:38.472 14.507 - 14.601: 98.7665% ( 2) 00:18:38.472 14.601 - 14.696: 98.7827% ( 2) 00:18:38.472 14.696 - 14.791: 98.7907% ( 1) 00:18:38.472 15.076 - 15.170: 98.7988% ( 1) 00:18:38.472 15.929 - 16.024: 98.8068% ( 1) 00:18:38.472 16.972 - 17.067: 98.8149% ( 1) 00:18:38.472 17.161 - 17.256: 98.8471% ( 4) 00:18:38.472 17.256 - 17.351: 98.8633% ( 2) 00:18:38.472 17.351 - 17.446: 98.9036% ( 5) 00:18:38.472 17.446 - 17.541: 98.9358% ( 4) 00:18:38.472 17.541 - 17.636: 98.9600% ( 3) 00:18:38.472 17.636 - 17.730: 99.0164% ( 7) 00:18:38.472 17.730 - 17.825: 99.0809% ( 8) 00:18:38.472 17.825 - 17.920: 99.1132% ( 4) 00:18:38.472 17.920 - 18.015: 99.1777% ( 8) 00:18:38.472 18.015 - 18.110: 99.2341% ( 7) 00:18:38.472 18.110 - 18.204: 99.2825% ( 6) 00:18:38.472 18.204 - 18.299: 99.3470% ( 8) 00:18:38.472 18.299 - 18.394: 99.4276% ( 10) 00:18:38.472 18.394 - 18.489: 99.5243% ( 12) 00:18:38.472 18.489 - 18.584: 99.5566% ( 4) 00:18:38.472 18.584 - 18.679: 99.5969% ( 5) 00:18:38.472 18.679 - 18.773: 99.6372% ( 5) 00:18:38.472 18.773 - 18.868: 99.6614% ( 3) 00:18:38.472 18.868 - 18.963: 99.6856% ( 3) 00:18:38.472 18.963 - 19.058: 99.7098% ( 3) 00:18:38.472 19.058 - 19.153: 99.7259% ( 2) 00:18:38.472 19.153 - 19.247: 99.7420% ( 2) 00:18:38.472 19.342 - 19.437: 99.7662% ( 3) 00:18:38.472 
19.437 - 19.532: 99.7743% ( 1) 00:18:38.472 19.721 - 19.816: 99.7823% ( 1) 00:18:38.472 19.911 - 20.006: 99.7904% ( 1) 00:18:38.472 20.575 - 20.670: 99.7985% ( 1) 00:18:38.472 21.428 - 21.523: 99.8065% ( 1) 00:18:38.472 22.661 - 22.756: 99.8146% ( 1) 00:18:38.472 23.230 - 23.324: 99.8226% ( 1) 00:18:38.472 23.893 - 23.988: 99.8307% ( 1) 00:18:38.472 26.738 - 26.927: 99.8388% ( 1) 00:18:38.472 31.858 - 32.047: 99.8468% ( 1) 00:18:38.472 34.892 - 35.081: 99.8549% ( 1) 00:18:38.472 3980.705 - 4004.978: 99.9839% ( 16) 00:18:38.472 4004.978 - 4029.250: 100.0000% ( 2) 00:18:38.472 00:18:38.472 Complete histogram 00:18:38.472 ================== 00:18:38.472 Range in us Cumulative Count 00:18:38.472 2.050 - 2.062: 0.0645% ( 8) 00:18:38.472 2.062 - 2.074: 21.6543% ( 2678) 00:18:38.472 2.074 - 2.086: 42.3654% ( 2569) 00:18:38.472 2.086 - 2.098: 43.9697% ( 199) 00:18:38.473 2.098 - 2.110: 54.8210% ( 1346) 00:18:38.473 2.110 - 2.121: 59.1906% ( 542) 00:18:38.473 2.121 - 2.133: 61.3431% ( 267) 00:18:38.473 2.133 - 2.145: 72.0735% ( 1331) 00:18:38.473 2.145 - 2.157: 75.9674% ( 483) 00:18:38.473 2.157 - 2.169: 77.4508% ( 184) 00:18:38.473 2.169 - 2.181: 80.7965% ( 415) 00:18:38.473 2.181 - 2.193: 81.7317% ( 116) 00:18:38.473 2.193 - 2.204: 82.5137% ( 97) 00:18:38.473 2.204 - 2.216: 86.4721% ( 491) 00:18:38.473 2.216 - 2.228: 88.5843% ( 262) 00:18:38.473 2.228 - 2.240: 90.8900% ( 286) 00:18:38.473 2.240 - 2.252: 92.6314% ( 216) 00:18:38.473 2.252 - 2.264: 93.0909% ( 57) 00:18:38.473 2.264 - 2.276: 93.3328% ( 30) 00:18:38.473 2.276 - 2.287: 93.7440% ( 51) 00:18:38.473 2.287 - 2.299: 94.2519% ( 63) 00:18:38.473 2.299 - 2.311: 94.9855% ( 91) 00:18:38.473 2.311 - 2.323: 95.2838% ( 37) 00:18:38.473 2.323 - 2.335: 95.3160% ( 4) 00:18:38.473 2.335 - 2.347: 95.3805% ( 8) 00:18:38.473 2.347 - 2.359: 95.5095% ( 16) 00:18:38.473 2.359 - 2.370: 95.8481% ( 42) 00:18:38.473 2.370 - 2.382: 96.2270% ( 47) 00:18:38.473 2.382 - 2.394: 96.6462% ( 52) 00:18:38.473 2.394 - 2.406: 96.9365% ( 36) 00:18:38.473 2.406 - 2.418: 97.1300% ( 24) 00:18:38.473 2.418 - 2.430: 97.3315% ( 25) 00:18:38.473 2.430 - 2.441: 97.4927% ( 20) 00:18:38.473 2.441 - 2.453: 97.6540% ( 20) 00:18:38.473 2.453 - 2.465: 97.7910% ( 17) 00:18:38.473 2.465 - 2.477: 97.9442% ( 19) 00:18:38.473 2.477 - 2.489: 98.1216% ( 22) 00:18:38.473 2.489 - 2.501: 98.1538% ( 4) 00:18:38.473 2.501 - 2.513: 98.2264% ( 9) 00:18:38.473 2.513 - 2.524: 98.2667% ( 5) 00:18:38.473 2.524 - 2.536: 98.2989% ( 4) 00:18:38.473 2.536 - 2.548: 98.3312% ( 4) 00:18:38.473 2.548 - 2.560: 98.3473% ( 2) 00:18:38.473 2.560 - 2.572: 98.3634% ( 2) 00:18:38.473 2.607 - 2.619: 98.3715% ( 1) 00:18:38.473 2.631 - 2.643: 98.3796% ( 1) 00:18:38.473 2.643 - 2.655: 98.3957% ( 2) 00:18:38.473 2.655 - 2.667: 98.4037% ( 1) 00:18:38.473 2.667 - 2.679: 98.4118% ( 1) 00:18:38.473 2.690 - 2.702: 98.4279% ( 2) 00:18:38.473 2.702 - 2.714: 98.4360% ( 1) 00:18:38.473 2.714 - 2.726: 98.4441% ( 1) 00:18:38.473 3.081 - 3.105: 98.4521% ( 1) 00:18:38.473 3.129 - 3.153: 98.4602% ( 1) 00:18:38.473 3.176 - 3.200: 98.4682% ( 1) 00:18:38.473 3.200 - 3.224: 98.4844% ( 2) 00:18:38.473 3.224 - 3.247: 98.5005% ( 2) 00:18:38.473 3.271 - 3.295: 98.5327% ( 4) 00:18:38.473 3.366 - 3.390: 98.5408% ( 1) 00:18:38.473 3.413 - 3.437: 98.5489% ( 1) 00:18:38.473 3.461 - 3.484: 98.5569% ( 1) 00:18:38.473 3.484 - 3.508: 98.5730% ( 2) 00:18:38.473 3.508 - 3.532: 98.5811% ( 1) 00:18:38.473 3.532 - 3.556: 98.5892% ( 1) 00:18:38.473 3.579 - 3.603: 98.5972% ( 1) 00:18:38.473 3.627 - 3.650: 98.6053% ( 1) 00:18:38.473 3.698 - 3.721: 98.6134% ( 1) 
00:18:38.473 3.721 - 3.745: 98.6214% ( 1) 00:18:38.473 3.745 - 3.769: 98.6295% ( 1) 00:18:38.473 3.793 - 3.816: 98.6375% ( 1) 00:18:38.473 3.816 - 3.840: 98.6537% ( 2) 00:18:38.473 3.911 - 3.935: 98.6617% ( 1) 00:18:38.473 4.006 - 4.030: 98.6698% ( 1) 00:18:38.473 4.978 - 5.001: 98.6778% ( 1) 00:18:38.473 5.025 - 5.049: 98.6940% ( 2) 00:18:38.473 5.049 - 5.073: 98.7020% ( 1) 00:18:38.473 5.191 - 5.215: 98.7101% ( 1) 00:18:38.473 5.784 - 5.807: 98.7182% ( 1) 00:18:38.473 5.807 - 5.831: 98.7262% ( 1) 00:18:38.473 6.068 - 6.116: 9[2024-12-07 00:45:54.557790] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:38.473 8.7343% ( 1) 00:18:38.473 6.163 - 6.210: 98.7423% ( 1) 00:18:38.473 6.210 - 6.258: 98.7504% ( 1) 00:18:38.473 6.258 - 6.305: 98.7665% ( 2) 00:18:38.473 6.305 - 6.353: 98.7827% ( 2) 00:18:38.473 6.400 - 6.447: 98.7907% ( 1) 00:18:38.473 6.779 - 6.827: 98.7988% ( 1) 00:18:38.473 7.490 - 7.538: 98.8149% ( 2) 00:18:38.473 15.360 - 15.455: 98.8230% ( 1) 00:18:38.473 15.550 - 15.644: 98.8310% ( 1) 00:18:38.473 15.644 - 15.739: 98.8552% ( 3) 00:18:38.473 15.739 - 15.834: 98.8713% ( 2) 00:18:38.473 15.834 - 15.929: 98.9036% ( 4) 00:18:38.473 15.929 - 16.024: 98.9197% ( 2) 00:18:38.473 16.024 - 16.119: 98.9439% ( 3) 00:18:38.473 16.119 - 16.213: 98.9600% ( 2) 00:18:38.473 16.213 - 16.308: 98.9681% ( 1) 00:18:38.473 16.308 - 16.403: 98.9923% ( 3) 00:18:38.473 16.403 - 16.498: 99.0406% ( 6) 00:18:38.473 16.498 - 16.593: 99.0890% ( 6) 00:18:38.473 16.593 - 16.687: 99.1777% ( 11) 00:18:38.473 16.687 - 16.782: 99.2261% ( 6) 00:18:38.473 16.782 - 16.877: 99.2664% ( 5) 00:18:38.473 16.877 - 16.972: 99.2744% ( 1) 00:18:38.473 16.972 - 17.067: 99.2825% ( 1) 00:18:38.473 17.067 - 17.161: 99.2906% ( 1) 00:18:38.473 17.161 - 17.256: 99.2986% ( 1) 00:18:38.473 17.446 - 17.541: 99.3067% ( 1) 00:18:38.473 17.541 - 17.636: 99.3309% ( 3) 00:18:38.473 17.636 - 17.730: 99.3389% ( 1) 00:18:38.473 18.015 - 18.110: 99.3470% ( 1) 00:18:38.473 19.058 - 19.153: 99.3550% ( 1) 00:18:38.473 23.040 - 23.135: 99.3631% ( 1) 00:18:38.473 34.513 - 34.702: 99.3712% ( 1) 00:18:38.473 135.016 - 135.775: 99.3792% ( 1) 00:18:38.473 3883.615 - 3907.887: 99.3873% ( 1) 00:18:38.473 3980.705 - 4004.978: 99.9274% ( 67) 00:18:38.473 4004.978 - 4029.250: 100.0000% ( 9) 00:18:38.473 00:18:38.473 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:38.473 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:38.473 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:38.473 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:38.473 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:39.040 [ 00:18:39.040 { 00:18:39.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:39.040 "subtype": "Discovery", 00:18:39.040 "listen_addresses": [], 00:18:39.040 "allow_any_host": true, 00:18:39.040 "hosts": [] 00:18:39.040 }, 00:18:39.040 { 00:18:39.040 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:39.040 "subtype": "NVMe", 00:18:39.040 "listen_addresses": [ 00:18:39.040 { 00:18:39.040 "trtype": "VFIOUSER", 
00:18:39.040 "adrfam": "IPv4", 00:18:39.040 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:39.040 "trsvcid": "0" 00:18:39.040 } 00:18:39.040 ], 00:18:39.040 "allow_any_host": true, 00:18:39.040 "hosts": [], 00:18:39.040 "serial_number": "SPDK1", 00:18:39.040 "model_number": "SPDK bdev Controller", 00:18:39.040 "max_namespaces": 32, 00:18:39.040 "min_cntlid": 1, 00:18:39.040 "max_cntlid": 65519, 00:18:39.040 "namespaces": [ 00:18:39.040 { 00:18:39.040 "nsid": 1, 00:18:39.040 "bdev_name": "Malloc1", 00:18:39.040 "name": "Malloc1", 00:18:39.040 "nguid": "7FC0233ABBDE41649A2D6F1B36B6F67E", 00:18:39.040 "uuid": "7fc0233a-bbde-4164-9a2d-6f1b36b6f67e" 00:18:39.040 } 00:18:39.040 ] 00:18:39.040 }, 00:18:39.040 { 00:18:39.040 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:39.040 "subtype": "NVMe", 00:18:39.040 "listen_addresses": [ 00:18:39.040 { 00:18:39.040 "trtype": "VFIOUSER", 00:18:39.040 "adrfam": "IPv4", 00:18:39.040 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:39.040 "trsvcid": "0" 00:18:39.040 } 00:18:39.040 ], 00:18:39.040 "allow_any_host": true, 00:18:39.040 "hosts": [], 00:18:39.040 "serial_number": "SPDK2", 00:18:39.040 "model_number": "SPDK bdev Controller", 00:18:39.040 "max_namespaces": 32, 00:18:39.040 "min_cntlid": 1, 00:18:39.040 "max_cntlid": 65519, 00:18:39.040 "namespaces": [ 00:18:39.040 { 00:18:39.040 "nsid": 1, 00:18:39.040 "bdev_name": "Malloc2", 00:18:39.040 "name": "Malloc2", 00:18:39.040 "nguid": "1ECEEC4D0A8040789F3479D7B53B474C", 00:18:39.040 "uuid": "1eceec4d-0a80-4078-9f34-79d7b53b474c" 00:18:39.040 } 00:18:39.040 ] 00:18:39.040 } 00:18:39.040 ] 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=242080 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:39.040 00:45:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:39.040 [2024-12-07 00:45:55.082593] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:39.040 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:39.298 Malloc3 00:18:39.298 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:39.556 [2024-12-07 00:45:55.664101] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.556 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:39.814 Asynchronous Event Request test 00:18:39.814 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.814 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.814 Registering asynchronous event callbacks... 00:18:39.814 Starting namespace attribute notice tests for all controllers... 00:18:39.814 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:39.814 aer_cb - Changed Namespace 00:18:39.814 Cleaning up... 
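The namespace-attribute AER exercise above reduces to a handful of RPC calls against the running target: start the aer listener, add a second namespace to the subsystem, and wait for the Namespace Attribute Changed notice; the subsystem listing that follows shows the result, with Malloc3 attached as nsid 2 under cnode1. A minimal sketch of the same flow, reusing the binary paths, transport address and NQN from this run (the touch-file polling and background handling are simplified here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aer=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer

  # Start the AER listener; it creates the touch file once its event callbacks are registered.
  $aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  rm -f /tmp/aer_touch_file

  # Attaching a second namespace triggers the Namespace Attribute Changed notice.
  $rpc bdev_malloc_create 64 512 --name Malloc3
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2

  # The listener logs "aer_cb - Changed Namespace"; nsid 2 is then visible in the listing.
  $rpc nvmf_get_subsystems
  wait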
00:18:39.814 [ 00:18:39.814 { 00:18:39.814 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:39.814 "subtype": "Discovery", 00:18:39.814 "listen_addresses": [], 00:18:39.814 "allow_any_host": true, 00:18:39.814 "hosts": [] 00:18:39.814 }, 00:18:39.814 { 00:18:39.814 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:39.814 "subtype": "NVMe", 00:18:39.814 "listen_addresses": [ 00:18:39.814 { 00:18:39.814 "trtype": "VFIOUSER", 00:18:39.814 "adrfam": "IPv4", 00:18:39.814 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:39.814 "trsvcid": "0" 00:18:39.814 } 00:18:39.814 ], 00:18:39.814 "allow_any_host": true, 00:18:39.814 "hosts": [], 00:18:39.814 "serial_number": "SPDK1", 00:18:39.814 "model_number": "SPDK bdev Controller", 00:18:39.814 "max_namespaces": 32, 00:18:39.814 "min_cntlid": 1, 00:18:39.814 "max_cntlid": 65519, 00:18:39.814 "namespaces": [ 00:18:39.814 { 00:18:39.814 "nsid": 1, 00:18:39.814 "bdev_name": "Malloc1", 00:18:39.814 "name": "Malloc1", 00:18:39.814 "nguid": "7FC0233ABBDE41649A2D6F1B36B6F67E", 00:18:39.814 "uuid": "7fc0233a-bbde-4164-9a2d-6f1b36b6f67e" 00:18:39.814 }, 00:18:39.814 { 00:18:39.814 "nsid": 2, 00:18:39.814 "bdev_name": "Malloc3", 00:18:39.814 "name": "Malloc3", 00:18:39.814 "nguid": "5EDCB36F215846C99EEC9D78DAAB81B3", 00:18:39.814 "uuid": "5edcb36f-2158-46c9-9eec-9d78daab81b3" 00:18:39.814 } 00:18:39.814 ] 00:18:39.814 }, 00:18:39.814 { 00:18:39.814 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:39.814 "subtype": "NVMe", 00:18:39.814 "listen_addresses": [ 00:18:39.814 { 00:18:39.814 "trtype": "VFIOUSER", 00:18:39.814 "adrfam": "IPv4", 00:18:39.814 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:39.814 "trsvcid": "0" 00:18:39.814 } 00:18:39.814 ], 00:18:39.814 "allow_any_host": true, 00:18:39.814 "hosts": [], 00:18:39.814 "serial_number": "SPDK2", 00:18:39.814 "model_number": "SPDK bdev Controller", 00:18:39.814 "max_namespaces": 32, 00:18:39.814 "min_cntlid": 1, 00:18:39.814 "max_cntlid": 65519, 00:18:39.814 "namespaces": [ 00:18:39.814 { 00:18:39.814 "nsid": 1, 00:18:39.814 "bdev_name": "Malloc2", 00:18:39.814 "name": "Malloc2", 00:18:39.814 "nguid": "1ECEEC4D0A8040789F3479D7B53B474C", 00:18:39.814 "uuid": "1eceec4d-0a80-4078-9f34-79d7b53b474c" 00:18:39.814 } 00:18:39.814 ] 00:18:39.814 } 00:18:39.814 ] 00:18:39.814 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 242080 00:18:39.814 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:39.814 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:40.077 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:40.077 00:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:40.077 [2024-12-07 00:45:55.979472] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:18:40.077 [2024-12-07 00:45:55.979511] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242213 ] 00:18:40.077 [2024-12-07 00:45:56.026953] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:40.077 [2024-12-07 00:45:56.039299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:40.077 [2024-12-07 00:45:56.039329] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f99be4ac000 00:18:40.077 [2024-12-07 00:45:56.040300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.041307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.042307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.043301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.044306] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.045325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.046312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.047338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:40.078 [2024-12-07 00:45:56.048330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:40.078 [2024-12-07 00:45:56.048352] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f99bd1a4000 00:18:40.078 [2024-12-07 00:45:56.049491] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:40.078 [2024-12-07 00:45:56.065753] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:40.078 [2024-12-07 00:45:56.065794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:40.078 [2024-12-07 00:45:56.070910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:40.078 [2024-12-07 00:45:56.070963] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:40.078 [2024-12-07 00:45:56.071079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:40.078 
[2024-12-07 00:45:56.071110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:40.078 [2024-12-07 00:45:56.071121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:40.078 [2024-12-07 00:45:56.071913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:40.078 [2024-12-07 00:45:56.071939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:40.078 [2024-12-07 00:45:56.071953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:40.078 [2024-12-07 00:45:56.072922] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:40.078 [2024-12-07 00:45:56.072943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:40.078 [2024-12-07 00:45:56.072956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.073926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:40.078 [2024-12-07 00:45:56.073946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.074928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:40.078 [2024-12-07 00:45:56.074947] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:40.078 [2024-12-07 00:45:56.074956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.074968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.075093] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:40.078 [2024-12-07 00:45:56.075105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.075114] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:40.078 [2024-12-07 00:45:56.075929] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:40.078 [2024-12-07 00:45:56.076939] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:40.078 [2024-12-07 00:45:56.077947] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:40.078 [2024-12-07 00:45:56.078942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.078 [2024-12-07 00:45:56.079030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:40.078 [2024-12-07 00:45:56.079974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:40.078 [2024-12-07 00:45:56.080014] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:40.078 [2024-12-07 00:45:56.080026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.080057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:40.078 [2024-12-07 00:45:56.080072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.080098] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.078 [2024-12-07 00:45:56.080109] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.078 [2024-12-07 00:45:56.080116] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.078 [2024-12-07 00:45:56.080135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.084018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.084046] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:40.078 [2024-12-07 00:45:56.084063] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:40.078 [2024-12-07 00:45:56.084070] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:40.078 [2024-12-07 00:45:56.084079] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:40.078 [2024-12-07 00:45:56.084087] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:40.078 [2024-12-07 00:45:56.084095] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:40.078 [2024-12-07 00:45:56.084103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.084115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:40.078 [2024-12-07 
00:45:56.084132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.092014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.092039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.078 [2024-12-07 00:45:56.092062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.078 [2024-12-07 00:45:56.092074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.078 [2024-12-07 00:45:56.092086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.078 [2024-12-07 00:45:56.092099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.092116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.092131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.100021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.100049] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:40.078 [2024-12-07 00:45:56.100059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.100072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.100083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.100098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.108006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.108083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.108100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.108115] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:40.078 [2024-12-07 00:45:56.108124] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:40.078 [2024-12-07 00:45:56.108131] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.078 [2024-12-07 00:45:56.108140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.116008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.116032] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:40.078 [2024-12-07 00:45:56.116052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.116068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:40.078 [2024-12-07 00:45:56.116081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.078 [2024-12-07 00:45:56.116090] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.078 [2024-12-07 00:45:56.116096] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.078 [2024-12-07 00:45:56.116105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.078 [2024-12-07 00:45:56.124018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:40.078 [2024-12-07 00:45:56.124050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.124071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.124086] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:40.079 [2024-12-07 00:45:56.124094] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.079 [2024-12-07 00:45:56.124101] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.079 [2024-12-07 00:45:56.124110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.132023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.132045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132115] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:40.079 [2024-12-07 00:45:56.132123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:40.079 [2024-12-07 00:45:56.132132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:40.079 [2024-12-07 00:45:56.132158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.140023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.140049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.148021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.148047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.156008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.156034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.164004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.164037] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:40.079 [2024-12-07 00:45:56.164048] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:40.079 [2024-12-07 00:45:56.164058] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:40.079 [2024-12-07 00:45:56.164064] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:40.079 [2024-12-07 00:45:56.164070] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:40.079 [2024-12-07 00:45:56.164080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:40.079 [2024-12-07 00:45:56.164092] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:40.079 
[2024-12-07 00:45:56.164100] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:40.079 [2024-12-07 00:45:56.164107] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.079 [2024-12-07 00:45:56.164116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.164127] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:40.079 [2024-12-07 00:45:56.164135] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:40.079 [2024-12-07 00:45:56.164140] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.079 [2024-12-07 00:45:56.164149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.164161] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:40.079 [2024-12-07 00:45:56.164169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:40.079 [2024-12-07 00:45:56.164175] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:40.079 [2024-12-07 00:45:56.164184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:40.079 [2024-12-07 00:45:56.172007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.172053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:40.079 [2024-12-07 00:45:56.172065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:40.079 ===================================================== 00:18:40.079 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.079 ===================================================== 00:18:40.079 Controller Capabilities/Features 00:18:40.079 ================================ 00:18:40.079 Vendor ID: 4e58 00:18:40.079 Subsystem Vendor ID: 4e58 00:18:40.079 Serial Number: SPDK2 00:18:40.079 Model Number: SPDK bdev Controller 00:18:40.079 Firmware Version: 25.01 00:18:40.079 Recommended Arb Burst: 6 00:18:40.079 IEEE OUI Identifier: 8d 6b 50 00:18:40.079 Multi-path I/O 00:18:40.079 May have multiple subsystem ports: Yes 00:18:40.079 May have multiple controllers: Yes 00:18:40.079 Associated with SR-IOV VF: No 00:18:40.079 Max Data Transfer Size: 131072 00:18:40.079 Max Number of Namespaces: 32 00:18:40.079 Max Number of I/O Queues: 127 00:18:40.079 NVMe Specification Version (VS): 1.3 00:18:40.079 NVMe Specification Version (Identify): 1.3 00:18:40.079 Maximum Queue Entries: 256 00:18:40.079 Contiguous Queues Required: Yes 00:18:40.079 Arbitration Mechanisms Supported 00:18:40.079 Weighted Round Robin: Not Supported 00:18:40.079 Vendor Specific: Not 
Supported 00:18:40.079 Reset Timeout: 15000 ms 00:18:40.079 Doorbell Stride: 4 bytes 00:18:40.079 NVM Subsystem Reset: Not Supported 00:18:40.079 Command Sets Supported 00:18:40.079 NVM Command Set: Supported 00:18:40.079 Boot Partition: Not Supported 00:18:40.079 Memory Page Size Minimum: 4096 bytes 00:18:40.079 Memory Page Size Maximum: 4096 bytes 00:18:40.079 Persistent Memory Region: Not Supported 00:18:40.079 Optional Asynchronous Events Supported 00:18:40.079 Namespace Attribute Notices: Supported 00:18:40.079 Firmware Activation Notices: Not Supported 00:18:40.079 ANA Change Notices: Not Supported 00:18:40.079 PLE Aggregate Log Change Notices: Not Supported 00:18:40.079 LBA Status Info Alert Notices: Not Supported 00:18:40.079 EGE Aggregate Log Change Notices: Not Supported 00:18:40.079 Normal NVM Subsystem Shutdown event: Not Supported 00:18:40.079 Zone Descriptor Change Notices: Not Supported 00:18:40.079 Discovery Log Change Notices: Not Supported 00:18:40.079 Controller Attributes 00:18:40.079 128-bit Host Identifier: Supported 00:18:40.079 Non-Operational Permissive Mode: Not Supported 00:18:40.079 NVM Sets: Not Supported 00:18:40.079 Read Recovery Levels: Not Supported 00:18:40.079 Endurance Groups: Not Supported 00:18:40.079 Predictable Latency Mode: Not Supported 00:18:40.079 Traffic Based Keep ALive: Not Supported 00:18:40.079 Namespace Granularity: Not Supported 00:18:40.079 SQ Associations: Not Supported 00:18:40.079 UUID List: Not Supported 00:18:40.079 Multi-Domain Subsystem: Not Supported 00:18:40.079 Fixed Capacity Management: Not Supported 00:18:40.079 Variable Capacity Management: Not Supported 00:18:40.079 Delete Endurance Group: Not Supported 00:18:40.079 Delete NVM Set: Not Supported 00:18:40.079 Extended LBA Formats Supported: Not Supported 00:18:40.079 Flexible Data Placement Supported: Not Supported 00:18:40.079 00:18:40.079 Controller Memory Buffer Support 00:18:40.079 ================================ 00:18:40.079 Supported: No 00:18:40.079 00:18:40.079 Persistent Memory Region Support 00:18:40.079 ================================ 00:18:40.079 Supported: No 00:18:40.079 00:18:40.079 Admin Command Set Attributes 00:18:40.079 ============================ 00:18:40.079 Security Send/Receive: Not Supported 00:18:40.079 Format NVM: Not Supported 00:18:40.079 Firmware Activate/Download: Not Supported 00:18:40.079 Namespace Management: Not Supported 00:18:40.079 Device Self-Test: Not Supported 00:18:40.079 Directives: Not Supported 00:18:40.079 NVMe-MI: Not Supported 00:18:40.079 Virtualization Management: Not Supported 00:18:40.079 Doorbell Buffer Config: Not Supported 00:18:40.079 Get LBA Status Capability: Not Supported 00:18:40.079 Command & Feature Lockdown Capability: Not Supported 00:18:40.079 Abort Command Limit: 4 00:18:40.079 Async Event Request Limit: 4 00:18:40.079 Number of Firmware Slots: N/A 00:18:40.080 Firmware Slot 1 Read-Only: N/A 00:18:40.080 Firmware Activation Without Reset: N/A 00:18:40.080 Multiple Update Detection Support: N/A 00:18:40.080 Firmware Update Granularity: No Information Provided 00:18:40.080 Per-Namespace SMART Log: No 00:18:40.080 Asymmetric Namespace Access Log Page: Not Supported 00:18:40.080 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:40.080 Command Effects Log Page: Supported 00:18:40.080 Get Log Page Extended Data: Supported 00:18:40.080 Telemetry Log Pages: Not Supported 00:18:40.080 Persistent Event Log Pages: Not Supported 00:18:40.080 Supported Log Pages Log Page: May Support 00:18:40.080 Commands Supported & 
Effects Log Page: Not Supported 00:18:40.080 Feature Identifiers & Effects Log Page:May Support 00:18:40.080 NVMe-MI Commands & Effects Log Page: May Support 00:18:40.080 Data Area 4 for Telemetry Log: Not Supported 00:18:40.080 Error Log Page Entries Supported: 128 00:18:40.080 Keep Alive: Supported 00:18:40.080 Keep Alive Granularity: 10000 ms 00:18:40.080 00:18:40.080 NVM Command Set Attributes 00:18:40.080 ========================== 00:18:40.080 Submission Queue Entry Size 00:18:40.080 Max: 64 00:18:40.080 Min: 64 00:18:40.080 Completion Queue Entry Size 00:18:40.080 Max: 16 00:18:40.080 Min: 16 00:18:40.080 Number of Namespaces: 32 00:18:40.080 Compare Command: Supported 00:18:40.080 Write Uncorrectable Command: Not Supported 00:18:40.080 Dataset Management Command: Supported 00:18:40.080 Write Zeroes Command: Supported 00:18:40.080 Set Features Save Field: Not Supported 00:18:40.080 Reservations: Not Supported 00:18:40.080 Timestamp: Not Supported 00:18:40.080 Copy: Supported 00:18:40.080 Volatile Write Cache: Present 00:18:40.080 Atomic Write Unit (Normal): 1 00:18:40.080 Atomic Write Unit (PFail): 1 00:18:40.080 Atomic Compare & Write Unit: 1 00:18:40.080 Fused Compare & Write: Supported 00:18:40.080 Scatter-Gather List 00:18:40.080 SGL Command Set: Supported (Dword aligned) 00:18:40.080 SGL Keyed: Not Supported 00:18:40.080 SGL Bit Bucket Descriptor: Not Supported 00:18:40.080 SGL Metadata Pointer: Not Supported 00:18:40.080 Oversized SGL: Not Supported 00:18:40.080 SGL Metadata Address: Not Supported 00:18:40.080 SGL Offset: Not Supported 00:18:40.080 Transport SGL Data Block: Not Supported 00:18:40.080 Replay Protected Memory Block: Not Supported 00:18:40.080 00:18:40.080 Firmware Slot Information 00:18:40.080 ========================= 00:18:40.080 Active slot: 1 00:18:40.080 Slot 1 Firmware Revision: 25.01 00:18:40.080 00:18:40.080 00:18:40.080 Commands Supported and Effects 00:18:40.080 ============================== 00:18:40.080 Admin Commands 00:18:40.080 -------------- 00:18:40.080 Get Log Page (02h): Supported 00:18:40.080 Identify (06h): Supported 00:18:40.080 Abort (08h): Supported 00:18:40.080 Set Features (09h): Supported 00:18:40.080 Get Features (0Ah): Supported 00:18:40.080 Asynchronous Event Request (0Ch): Supported 00:18:40.080 Keep Alive (18h): Supported 00:18:40.080 I/O Commands 00:18:40.080 ------------ 00:18:40.080 Flush (00h): Supported LBA-Change 00:18:40.080 Write (01h): Supported LBA-Change 00:18:40.080 Read (02h): Supported 00:18:40.080 Compare (05h): Supported 00:18:40.080 Write Zeroes (08h): Supported LBA-Change 00:18:40.080 Dataset Management (09h): Supported LBA-Change 00:18:40.080 Copy (19h): Supported LBA-Change 00:18:40.080 00:18:40.080 Error Log 00:18:40.080 ========= 00:18:40.080 00:18:40.080 Arbitration 00:18:40.080 =========== 00:18:40.080 Arbitration Burst: 1 00:18:40.080 00:18:40.080 Power Management 00:18:40.080 ================ 00:18:40.080 Number of Power States: 1 00:18:40.080 Current Power State: Power State #0 00:18:40.080 Power State #0: 00:18:40.080 Max Power: 0.00 W 00:18:40.080 Non-Operational State: Operational 00:18:40.080 Entry Latency: Not Reported 00:18:40.080 Exit Latency: Not Reported 00:18:40.080 Relative Read Throughput: 0 00:18:40.080 Relative Read Latency: 0 00:18:40.080 Relative Write Throughput: 0 00:18:40.080 Relative Write Latency: 0 00:18:40.080 Idle Power: Not Reported 00:18:40.080 Active Power: Not Reported 00:18:40.080 Non-Operational Permissive Mode: Not Supported 00:18:40.080 00:18:40.080 Health Information 
00:18:40.080 ================== 00:18:40.080 Critical Warnings: 00:18:40.080 Available Spare Space: OK 00:18:40.080 Temperature: OK 00:18:40.080 Device Reliability: OK 00:18:40.080 Read Only: No 00:18:40.080 Volatile Memory Backup: OK 00:18:40.080 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:40.080 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:40.080 Available Spare: 0% 00:18:40.080 [2024-12-07 00:45:56.172183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:40.080 [2024-12-07 00:45:56.180020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:40.080 [2024-12-07 00:45:56.180071] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:40.080 [2024-12-07 00:45:56.180089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.080 [2024-12-07 00:45:56.180100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.080 [2024-12-07 00:45:56.180110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.080 [2024-12-07 00:45:56.180120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.080 [2024-12-07 00:45:56.184007] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:40.080 [2024-12-07 00:45:56.184033] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:40.080 [2024-12-07 00:45:56.184233] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.080 [2024-12-07 00:45:56.184307] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:40.080 [2024-12-07 00:45:56.184322] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:40.080 [2024-12-07 00:45:56.185246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:40.080 [2024-12-07 00:45:56.185270] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:40.080 [2024-12-07 00:45:56.185337] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:40.080 [2024-12-07 00:45:56.186514] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:40.339 Available Spare Threshold: 0% 00:18:40.339 Life Percentage Used: 0% 00:18:40.339 Data Units Read: 0 00:18:40.339 Data Units Written: 0 00:18:40.339 Host Read Commands: 0 00:18:40.339 Host Write Commands: 0 00:18:40.339 Controller Busy Time: 0 minutes 00:18:40.339 Power Cycles: 0 00:18:40.339 Power On Hours: 0 hours 00:18:40.339 Unsafe Shutdowns: 0 00:18:40.339 Unrecoverable Media Errors: 0 00:18:40.339 Lifetime Error Log Entries: 0 00:18:40.339 Warning Temperature
Time: 0 minutes 00:18:40.339 Critical Temperature Time: 0 minutes 00:18:40.339 00:18:40.339 Number of Queues 00:18:40.339 ================ 00:18:40.339 Number of I/O Submission Queues: 127 00:18:40.339 Number of I/O Completion Queues: 127 00:18:40.339 00:18:40.339 Active Namespaces 00:18:40.339 ================= 00:18:40.339 Namespace ID:1 00:18:40.339 Error Recovery Timeout: Unlimited 00:18:40.339 Command Set Identifier: NVM (00h) 00:18:40.339 Deallocate: Supported 00:18:40.339 Deallocated/Unwritten Error: Not Supported 00:18:40.339 Deallocated Read Value: Unknown 00:18:40.339 Deallocate in Write Zeroes: Not Supported 00:18:40.339 Deallocated Guard Field: 0xFFFF 00:18:40.339 Flush: Supported 00:18:40.339 Reservation: Supported 00:18:40.339 Namespace Sharing Capabilities: Multiple Controllers 00:18:40.339 Size (in LBAs): 131072 (0GiB) 00:18:40.339 Capacity (in LBAs): 131072 (0GiB) 00:18:40.339 Utilization (in LBAs): 131072 (0GiB) 00:18:40.339 NGUID: 1ECEEC4D0A8040789F3479D7B53B474C 00:18:40.339 UUID: 1eceec4d-0a80-4078-9f34-79d7b53b474c 00:18:40.339 Thin Provisioning: Not Supported 00:18:40.339 Per-NS Atomic Units: Yes 00:18:40.339 Atomic Boundary Size (Normal): 0 00:18:40.339 Atomic Boundary Size (PFail): 0 00:18:40.339 Atomic Boundary Offset: 0 00:18:40.339 Maximum Single Source Range Length: 65535 00:18:40.339 Maximum Copy Length: 65535 00:18:40.339 Maximum Source Range Count: 1 00:18:40.339 NGUID/EUI64 Never Reused: No 00:18:40.339 Namespace Write Protected: No 00:18:40.339 Number of LBA Formats: 1 00:18:40.339 Current LBA Format: LBA Format #00 00:18:40.339 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:40.339 00:18:40.339 00:45:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:40.339 [2024-12-07 00:45:56.434864] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.613 Initializing NVMe Controllers 00:18:45.613 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:45.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:45.613 Initialization complete. Launching workers. 
00:18:45.613 ======================================================== 00:18:45.613 Latency(us) 00:18:45.613 Device Information : IOPS MiB/s Average min max 00:18:45.613 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30926.92 120.81 4138.30 1205.74 7575.36 00:18:45.613 ======================================================== 00:18:45.613 Total : 30926.92 120.81 4138.30 1205.74 7575.36 00:18:45.613 00:18:45.613 [2024-12-07 00:46:01.541423] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.613 00:46:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:45.873 [2024-12-07 00:46:01.803090] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.147 Initializing NVMe Controllers 00:18:51.147 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.147 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:51.147 Initialization complete. Launching workers. 00:18:51.147 ======================================================== 00:18:51.147 Latency(us) 00:18:51.147 Device Information : IOPS MiB/s Average min max 00:18:51.147 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29883.40 116.73 4282.43 1217.61 7673.80 00:18:51.147 ======================================================== 00:18:51.147 Total : 29883.40 116.73 4282.43 1217.61 7673.80 00:18:51.147 00:18:51.147 [2024-12-07 00:46:06.825075] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.147 00:46:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:51.147 [2024-12-07 00:46:07.059825] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.429 [2024-12-07 00:46:12.199141] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.429 Initializing NVMe Controllers 00:18:56.429 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:56.429 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:56.429 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:56.429 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:56.429 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:56.429 Initialization complete. Launching workers. 
00:18:56.429 Starting thread on core 2 00:18:56.429 Starting thread on core 3 00:18:56.429 Starting thread on core 1 00:18:56.429 00:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:56.429 [2024-12-07 00:46:12.514516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.724 [2024-12-07 00:46:15.583305] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.724 Initializing NVMe Controllers 00:18:59.724 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.724 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.724 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:59.725 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:59.725 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:59.725 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:59.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:59.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:59.725 Initialization complete. Launching workers. 00:18:59.725 Starting thread on core 1 with urgent priority queue 00:18:59.725 Starting thread on core 2 with urgent priority queue 00:18:59.725 Starting thread on core 3 with urgent priority queue 00:18:59.725 Starting thread on core 0 with urgent priority queue 00:18:59.725 SPDK bdev Controller (SPDK2 ) core 0: 6413.00 IO/s 15.59 secs/100000 ios 00:18:59.725 SPDK bdev Controller (SPDK2 ) core 1: 5540.33 IO/s 18.05 secs/100000 ios 00:18:59.725 SPDK bdev Controller (SPDK2 ) core 2: 5799.00 IO/s 17.24 secs/100000 ios 00:18:59.725 SPDK bdev Controller (SPDK2 ) core 3: 5596.67 IO/s 17.87 secs/100000 ios 00:18:59.725 ======================================================== 00:18:59.725 00:18:59.725 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:59.983 [2024-12-07 00:46:15.890503] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.983 Initializing NVMe Controllers 00:18:59.983 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.983 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.983 Namespace ID: 1 size: 0GB 00:18:59.983 Initialization complete. 00:18:59.983 INFO: using host memory buffer for IO 00:18:59.983 Hello world! 
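The throughput and latency figures reported above are internally consistent: at a queue depth of 128, Little's law puts the mean latency at roughly queue depth divided by IOPS, and the arbitration tool's "secs/100000 ios" column is simply 100000 divided by the per-core IO/s. A quick back-of-the-envelope check, using the numbers from the tables above:

  # 4 KiB sequential reads at qd 128: 128 / 30926.92 IOPS ~= 4139 us (reported average 4138.30 us)
  awk 'BEGIN { printf "%.0f us\n", 128 / 30926.92 * 1e6 }'
  # same check for the write run: 128 / 29883.40 IOPS ~= 4283 us (reported average 4282.43 us)
  awk 'BEGIN { printf "%.0f us\n", 128 / 29883.40 * 1e6 }'
  # arbitration, core 0: 100000 ios / 6413.00 IO/s ~= 15.59 s, matching the reported column
  awk 'BEGIN { printf "%.2f s\n", 100000 / 6413.00 }'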
00:18:59.983 [2024-12-07 00:46:15.899736] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.983 00:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:00.242 [2024-12-07 00:46:16.221433] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.179 Initializing NVMe Controllers 00:19:01.179 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.179 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.179 Initialization complete. Launching workers. 00:19:01.179 submit (in ns) avg, min, max = 7048.2, 3503.3, 4015780.0 00:19:01.179 complete (in ns) avg, min, max = 27803.7, 2080.0, 8004127.8 00:19:01.179 00:19:01.179 Submit histogram 00:19:01.179 ================ 00:19:01.179 Range in us Cumulative Count 00:19:01.179 3.484 - 3.508: 0.0407% ( 5) 00:19:01.179 3.508 - 3.532: 1.3255% ( 158) 00:19:01.179 3.532 - 3.556: 3.2691% ( 239) 00:19:01.179 3.556 - 3.579: 8.6119% ( 657) 00:19:01.179 3.579 - 3.603: 16.4512% ( 964) 00:19:01.179 3.603 - 3.627: 27.8848% ( 1406) 00:19:01.179 3.627 - 3.650: 37.4970% ( 1182) 00:19:01.179 3.650 - 3.674: 44.5393% ( 866) 00:19:01.179 3.674 - 3.698: 49.5812% ( 620) 00:19:01.179 3.698 - 3.721: 55.2086% ( 692) 00:19:01.179 3.721 - 3.745: 59.5348% ( 532) 00:19:01.179 3.745 - 3.769: 63.3000% ( 463) 00:19:01.179 3.769 - 3.793: 66.4390% ( 386) 00:19:01.179 3.793 - 3.816: 69.5048% ( 377) 00:19:01.179 3.816 - 3.840: 73.2618% ( 462) 00:19:01.179 3.840 - 3.864: 77.5230% ( 524) 00:19:01.179 3.864 - 3.887: 81.5483% ( 495) 00:19:01.179 3.887 - 3.911: 84.5247% ( 366) 00:19:01.179 3.911 - 3.935: 86.8586% ( 287) 00:19:01.179 3.935 - 3.959: 88.6476% ( 220) 00:19:01.179 3.959 - 3.982: 90.3798% ( 213) 00:19:01.179 3.982 - 4.006: 91.9899% ( 198) 00:19:01.179 4.006 - 4.030: 92.9902% ( 123) 00:19:01.179 4.030 - 4.053: 93.8115% ( 101) 00:19:01.179 4.053 - 4.077: 94.6003% ( 97) 00:19:01.179 4.077 - 4.101: 95.2915% ( 85) 00:19:01.179 4.101 - 4.124: 95.6900% ( 49) 00:19:01.179 4.124 - 4.148: 96.0641% ( 46) 00:19:01.179 4.148 - 4.172: 96.3568% ( 36) 00:19:01.179 4.172 - 4.196: 96.5032% ( 18) 00:19:01.179 4.196 - 4.219: 96.6333% ( 16) 00:19:01.179 4.219 - 4.243: 96.7390% ( 13) 00:19:01.179 4.243 - 4.267: 96.7960% ( 7) 00:19:01.179 4.267 - 4.290: 96.8773% ( 10) 00:19:01.179 4.290 - 4.314: 96.9830% ( 13) 00:19:01.179 4.314 - 4.338: 97.0155% ( 4) 00:19:01.179 4.338 - 4.361: 97.0318% ( 2) 00:19:01.179 4.361 - 4.385: 97.0562% ( 3) 00:19:01.179 4.385 - 4.409: 97.0969% ( 5) 00:19:01.179 4.409 - 4.433: 97.1375% ( 5) 00:19:01.179 4.456 - 4.480: 97.1456% ( 1) 00:19:01.179 4.480 - 4.504: 97.1538% ( 1) 00:19:01.179 4.504 - 4.527: 97.1782% ( 3) 00:19:01.179 4.527 - 4.551: 97.1863% ( 1) 00:19:01.179 4.551 - 4.575: 97.1944% ( 1) 00:19:01.179 4.575 - 4.599: 97.2026% ( 1) 00:19:01.179 4.599 - 4.622: 97.2107% ( 1) 00:19:01.179 4.646 - 4.670: 97.2188% ( 1) 00:19:01.180 4.670 - 4.693: 97.2270% ( 1) 00:19:01.180 4.693 - 4.717: 97.2351% ( 1) 00:19:01.180 4.717 - 4.741: 97.2514% ( 2) 00:19:01.180 4.741 - 4.764: 97.2595% ( 1) 00:19:01.180 4.764 - 4.788: 97.3164% ( 7) 00:19:01.180 4.788 - 4.812: 97.3489% ( 4) 00:19:01.180 4.812 - 4.836: 97.3652% ( 2) 00:19:01.180 4.836 - 4.859: 97.4140% ( 6) 00:19:01.180 4.859 - 4.883: 97.4709% ( 7) 00:19:01.180 4.883 - 
4.907: 97.4953% ( 3) 00:19:01.180 4.907 - 4.930: 97.5604% ( 8) 00:19:01.180 4.930 - 4.954: 97.6173% ( 7) 00:19:01.180 4.954 - 4.978: 97.6498% ( 4) 00:19:01.180 4.978 - 5.001: 97.6905% ( 5) 00:19:01.180 5.001 - 5.025: 97.7312% ( 5) 00:19:01.180 5.025 - 5.049: 97.7474% ( 2) 00:19:01.180 5.049 - 5.073: 97.8125% ( 8) 00:19:01.180 5.073 - 5.096: 97.8531% ( 5) 00:19:01.180 5.096 - 5.120: 97.8775% ( 3) 00:19:01.180 5.120 - 5.144: 97.9345% ( 7) 00:19:01.180 5.144 - 5.167: 97.9507% ( 2) 00:19:01.180 5.167 - 5.191: 97.9589% ( 1) 00:19:01.180 5.191 - 5.215: 97.9914% ( 4) 00:19:01.180 5.215 - 5.239: 98.0076% ( 2) 00:19:01.180 5.239 - 5.262: 98.0158% ( 1) 00:19:01.180 5.286 - 5.310: 98.0402% ( 3) 00:19:01.180 5.333 - 5.357: 98.0564% ( 2) 00:19:01.180 5.381 - 5.404: 98.0727% ( 2) 00:19:01.180 5.452 - 5.476: 98.0808% ( 1) 00:19:01.180 5.547 - 5.570: 98.0890% ( 1) 00:19:01.180 5.641 - 5.665: 98.0971% ( 1) 00:19:01.180 5.665 - 5.689: 98.1052% ( 1) 00:19:01.180 5.713 - 5.736: 98.1134% ( 1) 00:19:01.180 5.736 - 5.760: 98.1215% ( 1) 00:19:01.180 5.760 - 5.784: 98.1296% ( 1) 00:19:01.180 5.807 - 5.831: 98.1378% ( 1) 00:19:01.180 5.950 - 5.973: 98.1540% ( 2) 00:19:01.180 6.163 - 6.210: 98.1703% ( 2) 00:19:01.180 6.542 - 6.590: 98.1865% ( 2) 00:19:01.180 6.732 - 6.779: 98.1947% ( 1) 00:19:01.180 6.874 - 6.921: 98.2191% ( 3) 00:19:01.180 6.969 - 7.016: 98.2272% ( 1) 00:19:01.180 7.016 - 7.064: 98.2353% ( 1) 00:19:01.180 7.064 - 7.111: 98.2516% ( 2) 00:19:01.180 7.111 - 7.159: 98.2679% ( 2) 00:19:01.180 7.159 - 7.206: 98.2841% ( 2) 00:19:01.180 7.206 - 7.253: 98.3004% ( 2) 00:19:01.180 7.301 - 7.348: 98.3085% ( 1) 00:19:01.180 7.396 - 7.443: 98.3167% ( 1) 00:19:01.180 7.443 - 7.490: 98.3329% ( 2) 00:19:01.180 7.538 - 7.585: 98.3411% ( 1) 00:19:01.180 7.680 - 7.727: 98.3492% ( 1) 00:19:01.180 7.727 - 7.775: 98.3817% ( 4) 00:19:01.180 7.822 - 7.870: 98.3899% ( 1) 00:19:01.180 7.870 - 7.917: 98.3980% ( 1) 00:19:01.180 7.917 - 7.964: 98.4224% ( 3) 00:19:01.180 7.964 - 8.012: 98.4305% ( 1) 00:19:01.180 8.012 - 8.059: 98.4468% ( 2) 00:19:01.180 8.249 - 8.296: 98.4630% ( 2) 00:19:01.180 8.296 - 8.344: 98.4712% ( 1) 00:19:01.180 8.344 - 8.391: 98.4793% ( 1) 00:19:01.180 8.391 - 8.439: 98.4874% ( 1) 00:19:01.180 8.486 - 8.533: 98.4956% ( 1) 00:19:01.180 8.533 - 8.581: 98.5037% ( 1) 00:19:01.180 8.581 - 8.628: 98.5118% ( 1) 00:19:01.180 8.770 - 8.818: 98.5200% ( 1) 00:19:01.180 9.007 - 9.055: 98.5281% ( 1) 00:19:01.180 9.055 - 9.102: 98.5362% ( 1) 00:19:01.180 9.150 - 9.197: 98.5444% ( 1) 00:19:01.180 9.292 - 9.339: 98.5606% ( 2) 00:19:01.180 9.339 - 9.387: 98.5688% ( 1) 00:19:01.180 9.671 - 9.719: 98.5769% ( 1) 00:19:01.180 9.956 - 10.003: 98.5850% ( 1) 00:19:01.180 10.050 - 10.098: 98.5932% ( 1) 00:19:01.180 10.145 - 10.193: 98.6013% ( 1) 00:19:01.180 10.572 - 10.619: 98.6094% ( 1) 00:19:01.180 10.761 - 10.809: 98.6175% ( 1) 00:19:01.180 10.856 - 10.904: 98.6338% ( 2) 00:19:01.180 11.188 - 11.236: 98.6419% ( 1) 00:19:01.180 11.330 - 11.378: 98.6501% ( 1) 00:19:01.180 11.473 - 11.520: 98.6582% ( 1) 00:19:01.180 11.615 - 11.662: 98.6663% ( 1) 00:19:01.180 12.041 - 12.089: 98.6745% ( 1) 00:19:01.180 12.136 - 12.231: 98.6826% ( 1) 00:19:01.180 12.231 - 12.326: 98.6907% ( 1) 00:19:01.180 12.326 - 12.421: 98.6989% ( 1) 00:19:01.180 12.421 - 12.516: 98.7070% ( 1) 00:19:01.180 12.516 - 12.610: 98.7233% ( 2) 00:19:01.180 12.800 - 12.895: 98.7314% ( 1) 00:19:01.180 12.990 - 13.084: 98.7477% ( 2) 00:19:01.180 13.464 - 13.559: 98.7558% ( 1) 00:19:01.180 14.033 - 14.127: 98.7639% ( 1) 00:19:01.180 14.222 - 14.317: 98.7721% ( 1) 
00:19:01.180 14.412 - 14.507: 98.7802% ( 1) 00:19:01.180 14.507 - 14.601: 98.7883% ( 1) 00:19:01.180 14.601 - 14.696: 98.7965% ( 1) 00:19:01.180 14.791 - 14.886: 98.8046% ( 1) 00:19:01.180 15.076 - 15.170: 98.8127% ( 1) 00:19:01.180 15.360 - 15.455: 98.8209% ( 1) 00:19:01.180 16.972 - 17.067: 98.8290% ( 1) 00:19:01.180 17.067 - 17.161: 98.8371% ( 1) 00:19:01.180 17.161 - 17.256: 98.8452% ( 1) 00:19:01.180 17.256 - 17.351: 98.8615% ( 2) 00:19:01.180 17.351 - 17.446: 98.8940% ( 4) 00:19:01.180 17.446 - 17.541: 98.9103% ( 2) 00:19:01.180 17.541 - 17.636: 98.9835% ( 9) 00:19:01.180 17.636 - 17.730: 99.0485% ( 8) 00:19:01.180 17.730 - 17.825: 99.1461% ( 12) 00:19:01.180 17.825 - 17.920: 99.1949% ( 6) 00:19:01.180 17.920 - 18.015: 99.2519% ( 7) 00:19:01.180 18.015 - 18.110: 99.2925% ( 5) 00:19:01.180 18.110 - 18.204: 99.3413% ( 6) 00:19:01.180 18.204 - 18.299: 99.4145% ( 9) 00:19:01.180 18.299 - 18.394: 99.5202% ( 13) 00:19:01.180 18.394 - 18.489: 99.5609% ( 5) 00:19:01.180 18.489 - 18.584: 99.6259% ( 8) 00:19:01.180 18.584 - 18.679: 99.6747% ( 6) 00:19:01.180 18.679 - 18.773: 99.6910% ( 2) 00:19:01.180 18.773 - 18.868: 99.7235% ( 4) 00:19:01.180 18.963 - 19.058: 99.7560% ( 4) 00:19:01.180 19.058 - 19.153: 99.7804% ( 3) 00:19:01.180 19.247 - 19.342: 99.7886% ( 1) 00:19:01.180 19.437 - 19.532: 99.7967% ( 1) 00:19:01.180 19.532 - 19.627: 99.8048% ( 1) 00:19:01.180 19.721 - 19.816: 99.8211% ( 2) 00:19:01.180 19.816 - 19.911: 99.8292% ( 1) 00:19:01.180 20.480 - 20.575: 99.8374% ( 1) 00:19:01.180 21.713 - 21.807: 99.8455% ( 1) 00:19:01.180 24.083 - 24.178: 99.8536% ( 1) 00:19:01.181 24.273 - 24.462: 99.8618% ( 1) 00:19:01.181 25.410 - 25.600: 99.8699% ( 1) 00:19:01.181 25.979 - 26.169: 99.8780% ( 1) 00:19:01.181 26.359 - 26.548: 99.8862% ( 1) 00:19:01.181 26.738 - 26.927: 99.8943% ( 1) 00:19:01.181 27.307 - 27.496: 99.9024% ( 1) 00:19:01.181 27.686 - 27.876: 99.9105% ( 1) 00:19:01.181 29.961 - 30.151: 99.9187% ( 1) 00:19:01.181 2063.170 - 2075.307: 99.9268% ( 1) 00:19:01.181 3980.705 - 4004.978: 99.9756% ( 6) 00:19:01.181 4004.978 - 4029.250: 100.0000% ( 3) 00:19:01.181 00:19:01.181 Complete histogram 00:19:01.181 ================== 00:19:01.181 Range in us Cumulative Count 00:19:01.181 2.074 - 2.086: 0.8295% ( 102) 00:19:01.181 2.086 - 2.098: 28.5842% ( 3413) 00:19:01.181 2.098 - 2.110: 44.6044% ( 1970) 00:19:01.181 2.110 - 2.121: 47.4669% ( 352) 00:19:01.181 2.121 - 2.133: 57.2904% ( 1208) 00:19:01.181 2.133 - 2.145: 60.6408% ( 412) 00:19:01.181 2.145 - 2.157: 64.1864% ( 436) 00:19:01.181 2.157 - 2.169: 74.5304% ( 1272) 00:19:01.181 2.169 - 2.181: 77.2465% ( 334) 00:19:01.181 2.181 - 2.193: 78.9786% ( 213) 00:19:01.181 2.193 - 2.204: 81.6459% ( 328) 00:19:01.181 2.204 - 2.216: 82.5323% ( 109) 00:19:01.181 2.216 - 2.228: 83.7603% ( 151) 00:19:01.181 2.228 - 2.240: 87.2977% ( 435) 00:19:01.181 2.240 - 2.252: 89.7943% ( 307) 00:19:01.181 2.252 - 2.264: 91.7460% ( 240) 00:19:01.181 2.264 - 2.276: 92.8357% ( 134) 00:19:01.181 2.276 - 2.287: 93.1935% ( 44) 00:19:01.181 2.287 - 2.299: 93.5025% ( 38) 00:19:01.181 2.299 - 2.311: 93.7464% ( 30) 00:19:01.181 2.311 - 2.323: 94.3970% ( 80) 00:19:01.181 2.323 - 2.335: 94.9500% ( 68) 00:19:01.181 2.335 - 2.347: 95.0964% ( 18) 00:19:01.181 2.347 - 2.359: 95.1939% ( 12) 00:19:01.181 2.359 - 2.370: 95.4135% ( 27) 00:19:01.181 2.370 - 2.382: 95.6737% ( 32) 00:19:01.181 2.382 - 2.394: 96.0478% ( 46) 00:19:01.181 2.394 - 2.406: 96.5764% ( 65) 00:19:01.181 2.406 - 2.418: 97.0806% ( 62) 00:19:01.181 2.418 - 2.430: 97.3002% ( 27) 00:19:01.181 2.430 - 2.441: 97.4791% ( 22) 
00:19:01.181 2.441 - 2.453: 97.6173% ( 17) 00:19:01.181 2.453 - 2.465: 97.7962% ( 22) 00:19:01.181 2.465 - 2.477: 97.8938% ( 12) 00:19:01.181 2.477 - 2.489: 97.9914% ( 12) 00:19:01.181 2.489 - 2.501: 98.0727% ( 10) 00:19:01.181 2.501 - 2.513: 98.1215% ( 6) 00:19:01.181 2.513 - 2.524: 98.1947% ( 9) 00:19:01.181 2.524 - 2.536: 98.2272% ( 4) 00:19:01.181 2.536 - 2.548: 98.2435% ( 2) 00:19:01.181 2.548 - 2.560: 98.2597% ( 2) 00:19:01.181 2.560 - 2.572: 98.2841% ( 3) 00:19:01.181 2.572 - 2.584: 98.3004% ( 2) 00:19:01.181 2.584 - 2.596: 98.3167% ( 2) 00:19:01.181 2.643 - 2.655: 98.3329% ( 2) 00:19:01.181 2.655 - 2.667: 98.3411% ( 1) 00:19:01.181 2.726 - 2.738: 98.3492% ( 1) 00:19:01.181 2.844 - 2.856: 98.3573% ( 1) 00:19:01.181 3.176 - 3.200: 98.3655% ( 1) 00:19:01.181 3.200 - 3.224: 98.3736% ( 1) 00:19:01.181 3.319 - 3.342: 98.3817% ( 1) 00:19:01.181 3.366 - 3.390: 98.3899% ( 1) 00:19:01.181 3.413 - 3.437: 98.3980% ( 1) 00:19:01.181 3.461 - 3.484: 98.4061% ( 1) 00:19:01.181 3.508 - 3.532: 98.4142% ( 1) 00:19:01.181 3.532 - 3.556: 98.4224% ( 1) 00:19:01.181 3.603 - 3.627: 98.4305% ( 1) 00:19:01.181 3.627 - 3.650: 98.4386% ( 1) 00:19:01.181 3.650 - 3.674: 98.4549% ( 2) 00:19:01.181 3.674 - 3.698: 98.4630% ( 1) 00:19:01.181 3.698 - 3.721: 98.4793% ( 2) 00:19:01.181 3.745 - 3.769: 98.5037% ( 3) 00:19:01.181 3.793 - 3.816: 98.5200% ( 2) 00:19:01.181 3.816 - 3.840: 9[2024-12-07 00:46:17.314812] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:01.440 8.5281% ( 1) 00:19:01.440 3.864 - 3.887: 98.5362% ( 1) 00:19:01.440 4.006 - 4.030: 98.5444% ( 1) 00:19:01.440 4.030 - 4.053: 98.5525% ( 1) 00:19:01.440 4.385 - 4.409: 98.5606% ( 1) 00:19:01.440 5.120 - 5.144: 98.5688% ( 1) 00:19:01.440 5.476 - 5.499: 98.5769% ( 1) 00:19:01.440 5.499 - 5.523: 98.5850% ( 1) 00:19:01.440 5.807 - 5.831: 98.5932% ( 1) 00:19:01.440 5.950 - 5.973: 98.6013% ( 1) 00:19:01.440 5.997 - 6.021: 98.6094% ( 1) 00:19:01.440 6.068 - 6.116: 98.6175% ( 1) 00:19:01.440 6.163 - 6.210: 98.6338% ( 2) 00:19:01.440 6.210 - 6.258: 98.6419% ( 1) 00:19:01.440 6.258 - 6.305: 98.6501% ( 1) 00:19:01.440 6.542 - 6.590: 98.6582% ( 1) 00:19:01.440 6.590 - 6.637: 98.6663% ( 1) 00:19:01.440 6.732 - 6.779: 98.6745% ( 1) 00:19:01.440 6.874 - 6.921: 98.6826% ( 1) 00:19:01.440 6.921 - 6.969: 98.6907% ( 1) 00:19:01.440 7.064 - 7.111: 98.6989% ( 1) 00:19:01.440 7.159 - 7.206: 98.7070% ( 1) 00:19:01.440 8.249 - 8.296: 98.7151% ( 1) 00:19:01.440 9.481 - 9.529: 98.7233% ( 1) 00:19:01.440 10.524 - 10.572: 98.7314% ( 1) 00:19:01.440 15.360 - 15.455: 98.7395% ( 1) 00:19:01.440 15.550 - 15.644: 98.7477% ( 1) 00:19:01.440 15.644 - 15.739: 98.7721% ( 3) 00:19:01.440 15.739 - 15.834: 98.7883% ( 2) 00:19:01.440 15.834 - 15.929: 98.8046% ( 2) 00:19:01.440 15.929 - 16.024: 98.8452% ( 5) 00:19:01.440 16.119 - 16.213: 98.8940% ( 6) 00:19:01.440 16.213 - 16.308: 98.9103% ( 2) 00:19:01.440 16.308 - 16.403: 98.9266% ( 2) 00:19:01.440 16.403 - 16.498: 98.9510% ( 3) 00:19:01.440 16.498 - 16.593: 99.0079% ( 7) 00:19:01.440 16.593 - 16.687: 99.0892% ( 10) 00:19:01.440 16.687 - 16.782: 99.1543% ( 8) 00:19:01.440 16.782 - 16.877: 99.1787% ( 3) 00:19:01.440 16.877 - 16.972: 99.1949% ( 2) 00:19:01.440 16.972 - 17.067: 99.2112% ( 2) 00:19:01.440 17.067 - 17.161: 99.2275% ( 2) 00:19:01.440 17.161 - 17.256: 99.2356% ( 1) 00:19:01.440 17.256 - 17.351: 99.2600% ( 3) 00:19:01.440 17.351 - 17.446: 99.2681% ( 1) 00:19:01.440 17.541 - 17.636: 99.2844% ( 2) 00:19:01.440 17.636 - 17.730: 99.3006% ( 2) 00:19:01.440 17.920 - 18.015: 99.3088% 
( 1) 00:19:01.440 18.204 - 18.299: 99.3169% ( 1) 00:19:01.440 18.299 - 18.394: 99.3250% ( 1) 00:19:01.440 18.394 - 18.489: 99.3332% ( 1) 00:19:01.441 18.489 - 18.584: 99.3494% ( 2) 00:19:01.441 19.058 - 19.153: 99.3576% ( 1) 00:19:01.441 20.006 - 20.101: 99.3657% ( 1) 00:19:01.441 2002.489 - 2014.625: 99.3738% ( 1) 00:19:01.441 3980.705 - 4004.978: 99.7723% ( 49) 00:19:01.441 4004.978 - 4029.250: 99.9919% ( 27) 00:19:01.441 7961.410 - 8009.956: 100.0000% ( 1) 00:19:01.441 00:19:01.441 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:01.441 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:01.441 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:01.441 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:01.441 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:01.701 [ 00:19:01.701 { 00:19:01.701 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:01.701 "subtype": "Discovery", 00:19:01.701 "listen_addresses": [], 00:19:01.701 "allow_any_host": true, 00:19:01.701 "hosts": [] 00:19:01.701 }, 00:19:01.701 { 00:19:01.702 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:01.702 "subtype": "NVMe", 00:19:01.702 "listen_addresses": [ 00:19:01.702 { 00:19:01.702 "trtype": "VFIOUSER", 00:19:01.702 "adrfam": "IPv4", 00:19:01.702 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:01.702 "trsvcid": "0" 00:19:01.702 } 00:19:01.702 ], 00:19:01.702 "allow_any_host": true, 00:19:01.702 "hosts": [], 00:19:01.702 "serial_number": "SPDK1", 00:19:01.702 "model_number": "SPDK bdev Controller", 00:19:01.702 "max_namespaces": 32, 00:19:01.702 "min_cntlid": 1, 00:19:01.702 "max_cntlid": 65519, 00:19:01.702 "namespaces": [ 00:19:01.702 { 00:19:01.702 "nsid": 1, 00:19:01.702 "bdev_name": "Malloc1", 00:19:01.702 "name": "Malloc1", 00:19:01.702 "nguid": "7FC0233ABBDE41649A2D6F1B36B6F67E", 00:19:01.702 "uuid": "7fc0233a-bbde-4164-9a2d-6f1b36b6f67e" 00:19:01.702 }, 00:19:01.702 { 00:19:01.702 "nsid": 2, 00:19:01.702 "bdev_name": "Malloc3", 00:19:01.702 "name": "Malloc3", 00:19:01.702 "nguid": "5EDCB36F215846C99EEC9D78DAAB81B3", 00:19:01.702 "uuid": "5edcb36f-2158-46c9-9eec-9d78daab81b3" 00:19:01.702 } 00:19:01.702 ] 00:19:01.702 }, 00:19:01.702 { 00:19:01.702 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:01.702 "subtype": "NVMe", 00:19:01.702 "listen_addresses": [ 00:19:01.702 { 00:19:01.702 "trtype": "VFIOUSER", 00:19:01.702 "adrfam": "IPv4", 00:19:01.702 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:01.702 "trsvcid": "0" 00:19:01.702 } 00:19:01.702 ], 00:19:01.702 "allow_any_host": true, 00:19:01.702 "hosts": [], 00:19:01.702 "serial_number": "SPDK2", 00:19:01.702 "model_number": "SPDK bdev Controller", 00:19:01.702 "max_namespaces": 32, 00:19:01.702 "min_cntlid": 1, 00:19:01.702 "max_cntlid": 65519, 00:19:01.702 "namespaces": [ 00:19:01.702 { 00:19:01.702 "nsid": 1, 00:19:01.702 "bdev_name": "Malloc2", 00:19:01.702 "name": "Malloc2", 00:19:01.702 "nguid": "1ECEEC4D0A8040789F3479D7B53B474C", 00:19:01.702 "uuid": "1eceec4d-0a80-4078-9f34-79d7b53b474c" 00:19:01.702 } 00:19:01.702 ] 00:19:01.702 } 00:19:01.702 ] 
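The nvmf_get_subsystems dump above shows the target state at this point: the discovery subsystem plus two NVMe subsystems (cnode1/cnode2), each with a VFIOUSER listener under /var/run/vfio-user and malloc-backed namespaces. As a minimal sketch (not part of the test run; it assumes the SPDK checkout path used throughout this job and python3 on PATH), the same RPC output can be reduced to one summary line per subsystem:

#!/usr/bin/env bash
# Sketch only: summarize NQN, listener traddr and namespace bdevs from nvmf_get_subsystems.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_get_subsystems | python3 -c '
import json, sys
for ss in json.load(sys.stdin):
    addrs = ",".join(l["traddr"] for l in ss.get("listen_addresses", []))
    nss = ",".join(n["bdev_name"] for n in ss.get("namespaces", []))
    print(ss["nqn"], "listeners=[" + addrs + "]", "namespaces=[" + nss + "]")
'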
00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=244728 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:01.702 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:01.970 [2024-12-07 00:46:17.856638] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.970 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:01.970 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:01.970 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:01.970 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:01.970 00:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:02.228 Malloc4 00:19:02.228 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:02.486 [2024-12-07 00:46:18.536774] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:02.486 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:02.486 Asynchronous Event Request test 00:19:02.486 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:02.486 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:02.486 Registering asynchronous event callbacks... 00:19:02.486 Starting namespace attribute notice tests for all controllers... 
00:19:02.486 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:02.486 aer_cb - Changed Namespace 00:19:02.486 Cleaning up... 00:19:02.745 [ 00:19:02.745 { 00:19:02.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:02.745 "subtype": "Discovery", 00:19:02.745 "listen_addresses": [], 00:19:02.745 "allow_any_host": true, 00:19:02.745 "hosts": [] 00:19:02.745 }, 00:19:02.745 { 00:19:02.745 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:02.745 "subtype": "NVMe", 00:19:02.745 "listen_addresses": [ 00:19:02.745 { 00:19:02.745 "trtype": "VFIOUSER", 00:19:02.745 "adrfam": "IPv4", 00:19:02.745 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:02.745 "trsvcid": "0" 00:19:02.745 } 00:19:02.745 ], 00:19:02.745 "allow_any_host": true, 00:19:02.745 "hosts": [], 00:19:02.745 "serial_number": "SPDK1", 00:19:02.745 "model_number": "SPDK bdev Controller", 00:19:02.745 "max_namespaces": 32, 00:19:02.745 "min_cntlid": 1, 00:19:02.745 "max_cntlid": 65519, 00:19:02.745 "namespaces": [ 00:19:02.745 { 00:19:02.745 "nsid": 1, 00:19:02.745 "bdev_name": "Malloc1", 00:19:02.745 "name": "Malloc1", 00:19:02.745 "nguid": "7FC0233ABBDE41649A2D6F1B36B6F67E", 00:19:02.745 "uuid": "7fc0233a-bbde-4164-9a2d-6f1b36b6f67e" 00:19:02.745 }, 00:19:02.745 { 00:19:02.745 "nsid": 2, 00:19:02.745 "bdev_name": "Malloc3", 00:19:02.745 "name": "Malloc3", 00:19:02.745 "nguid": "5EDCB36F215846C99EEC9D78DAAB81B3", 00:19:02.745 "uuid": "5edcb36f-2158-46c9-9eec-9d78daab81b3" 00:19:02.745 } 00:19:02.745 ] 00:19:02.745 }, 00:19:02.745 { 00:19:02.745 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:02.745 "subtype": "NVMe", 00:19:02.745 "listen_addresses": [ 00:19:02.745 { 00:19:02.745 "trtype": "VFIOUSER", 00:19:02.745 "adrfam": "IPv4", 00:19:02.745 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:02.745 "trsvcid": "0" 00:19:02.745 } 00:19:02.745 ], 00:19:02.745 "allow_any_host": true, 00:19:02.745 "hosts": [], 00:19:02.745 "serial_number": "SPDK2", 00:19:02.745 "model_number": "SPDK bdev Controller", 00:19:02.745 "max_namespaces": 32, 00:19:02.745 "min_cntlid": 1, 00:19:02.745 "max_cntlid": 65519, 00:19:02.745 "namespaces": [ 00:19:02.745 { 00:19:02.745 "nsid": 1, 00:19:02.745 "bdev_name": "Malloc2", 00:19:02.745 "name": "Malloc2", 00:19:02.745 "nguid": "1ECEEC4D0A8040789F3479D7B53B474C", 00:19:02.745 "uuid": "1eceec4d-0a80-4078-9f34-79d7b53b474c" 00:19:02.745 }, 00:19:02.745 { 00:19:02.745 "nsid": 2, 00:19:02.745 "bdev_name": "Malloc4", 00:19:02.745 "name": "Malloc4", 00:19:02.745 "nguid": "0BD9F0D82D6F4B1AA9C9020C6806C5B9", 00:19:02.745 "uuid": "0bd9f0d8-2d6f-4b1a-a9c9-020c6806c5b9" 00:19:02.745 } 00:19:02.745 ] 00:19:02.745 } 00:19:02.745 ] 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 244728 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 239023 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 239023 ']' 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 239023 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.745 00:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239023 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239023' 00:19:02.745 killing process with pid 239023 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 239023 00:19:02.745 00:46:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 239023 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=244876 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 244876' 00:19:03.316 Process pid: 244876 00:19:03.316 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 244876 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 244876 ']' 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:03.317 [2024-12-07 00:46:19.217009] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:03.317 [2024-12-07 00:46:19.218028] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:19:03.317 [2024-12-07 00:46:19.218096] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.317 [2024-12-07 00:46:19.283449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.317 [2024-12-07 00:46:19.325496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.317 [2024-12-07 00:46:19.325554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.317 [2024-12-07 00:46:19.325581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.317 [2024-12-07 00:46:19.325592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.317 [2024-12-07 00:46:19.325601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.317 [2024-12-07 00:46:19.327065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.317 [2024-12-07 00:46:19.327130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.317 [2024-12-07 00:46:19.327196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.317 [2024-12-07 00:46:19.327199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.317 [2024-12-07 00:46:19.407666] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:03.317 [2024-12-07 00:46:19.407865] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:03.317 [2024-12-07 00:46:19.408146] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:03.317 [2024-12-07 00:46:19.408695] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:03.317 [2024-12-07 00:46:19.408908] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
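At this point the target has been relaunched with --interrupt-mode on cores 0-3 and the poll-group threads switched to interrupt mode; the script then waits for the RPC socket before creating the VFIOUSER transport with the -M -I options. A minimal sketch of that start-up sequence (not the autotest helpers themselves; the poll loop and timeout below are assumptions standing in for waitforlisten):

#!/usr/bin/env bash
# Sketch only: start nvmf_tgt in interrupt mode and wait until its RPC socket answers.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!
# Poll /var/tmp/spdk.sock (rpc.py default) until the app responds or we give up.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done
# The trace that follows then creates the transport and the per-device
# subsystems, namespaces and listeners:
"$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I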
00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:03.317 00:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:04.698 00:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:05.268 Malloc1 00:19:05.268 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:05.527 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:05.786 00:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:06.045 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:06.045 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:06.045 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:06.305 Malloc2 00:19:06.305 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:06.564 00:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:07.134 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 244876 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 244876 ']' 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 244876 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244876 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244876' 00:19:07.393 killing process with pid 244876 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 244876 00:19:07.393 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 244876 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:07.653 00:19:07.653 real 0m54.198s 00:19:07.653 user 3m29.507s 00:19:07.653 sys 0m4.070s 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:07.653 ************************************ 00:19:07.653 END TEST nvmf_vfio_user 00:19:07.653 ************************************ 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.653 ************************************ 00:19:07.653 START TEST nvmf_vfio_user_nvme_compliance 00:19:07.653 ************************************ 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:07.653 * Looking for test storage... 
00:19:07.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.653 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.654 --rc genhtml_branch_coverage=1 00:19:07.654 --rc genhtml_function_coverage=1 00:19:07.654 --rc genhtml_legend=1 00:19:07.654 --rc geninfo_all_blocks=1 00:19:07.654 --rc geninfo_unexecuted_blocks=1 00:19:07.654 00:19:07.654 ' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.654 --rc genhtml_branch_coverage=1 00:19:07.654 --rc genhtml_function_coverage=1 00:19:07.654 --rc genhtml_legend=1 00:19:07.654 --rc geninfo_all_blocks=1 00:19:07.654 --rc geninfo_unexecuted_blocks=1 00:19:07.654 00:19:07.654 ' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.654 --rc genhtml_branch_coverage=1 00:19:07.654 --rc genhtml_function_coverage=1 00:19:07.654 --rc genhtml_legend=1 00:19:07.654 --rc geninfo_all_blocks=1 00:19:07.654 --rc geninfo_unexecuted_blocks=1 00:19:07.654 00:19:07.654 ' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.654 --rc genhtml_branch_coverage=1 00:19:07.654 --rc genhtml_function_coverage=1 00:19:07.654 --rc genhtml_legend=1 00:19:07.654 --rc geninfo_all_blocks=1 00:19:07.654 --rc 
geninfo_unexecuted_blocks=1 00:19:07.654 00:19:07.654 ' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.654 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=245488 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 245488' 00:19:07.655 Process pid: 245488 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 245488 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 245488 ']' 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.655 00:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:07.914 [2024-12-07 00:46:23.838243] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:19:07.914 [2024-12-07 00:46:23.838351] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.914 [2024-12-07 00:46:23.907818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.914 [2024-12-07 00:46:23.951209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.914 [2024-12-07 00:46:23.951267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.914 [2024-12-07 00:46:23.951294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.914 [2024-12-07 00:46:23.951305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.914 [2024-12-07 00:46:23.951315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.914 [2024-12-07 00:46:23.952614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.914 [2024-12-07 00:46:23.952676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.914 [2024-12-07 00:46:23.952679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.175 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.175 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:08.175 00:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:09.116 malloc0 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:09.116 00:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.116 00:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:09.376 00:19:09.376 00:19:09.376 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.376 http://cunit.sourceforge.net/ 00:19:09.376 00:19:09.376 00:19:09.376 Suite: nvme_compliance 00:19:09.376 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-07 00:46:25.319119] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.376 [2024-12-07 00:46:25.320651] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:09.376 [2024-12-07 00:46:25.320675] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:09.376 [2024-12-07 00:46:25.320702] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:09.376 [2024-12-07 00:46:25.322138] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.376 passed 00:19:09.376 Test: admin_identify_ctrlr_verify_fused ...[2024-12-07 00:46:25.407706] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.376 [2024-12-07 00:46:25.410728] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.376 passed 00:19:09.376 Test: admin_identify_ns ...[2024-12-07 00:46:25.496554] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.636 [2024-12-07 00:46:25.557016] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:09.636 [2024-12-07 00:46:25.565029] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:09.636 [2024-12-07 00:46:25.586145] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:09.636 passed 00:19:09.636 Test: admin_get_features_mandatory_features ...[2024-12-07 00:46:25.668663] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.636 [2024-12-07 00:46:25.671684] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.636 passed 00:19:09.636 Test: admin_get_features_optional_features ...[2024-12-07 00:46:25.755245] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.636 [2024-12-07 00:46:25.761308] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.895 passed 00:19:09.895 Test: admin_set_features_number_of_queues ...[2024-12-07 00:46:25.843428] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.895 [2024-12-07 00:46:25.948137] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.895 passed 00:19:09.895 Test: admin_get_log_page_mandatory_logs ...[2024-12-07 00:46:26.032191] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.895 [2024-12-07 00:46:26.035215] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.153 passed 00:19:10.153 Test: admin_get_log_page_with_lpo ...[2024-12-07 00:46:26.119342] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.153 [2024-12-07 00:46:26.187028] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:10.153 [2024-12-07 00:46:26.200103] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.153 passed 00:19:10.153 Test: fabric_property_get ...[2024-12-07 00:46:26.284692] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.153 [2024-12-07 00:46:26.285963] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:10.153 [2024-12-07 00:46:26.287712] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.415 passed 00:19:10.415 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-07 00:46:26.371278] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.415 [2024-12-07 00:46:26.372605] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:10.415 [2024-12-07 00:46:26.374309] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.415 passed 00:19:10.415 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-07 00:46:26.456473] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.415 [2024-12-07 00:46:26.541003] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.415 [2024-12-07 00:46:26.557020] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.415 [2024-12-07 00:46:26.562149] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.674 passed 00:19:10.674 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-07 00:46:26.642691] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.674 [2024-12-07 00:46:26.643969] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:10.674 [2024-12-07 00:46:26.647729] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.674 passed 00:19:10.674 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-07 00:46:26.731502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.674 [2024-12-07 00:46:26.806021] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:10.935 [2024-12-07 00:46:26.830004] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.935 [2024-12-07 00:46:26.835124] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.935 passed 00:19:10.935 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-07 00:46:26.918799] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.935 [2024-12-07 00:46:26.920144] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:10.935 [2024-12-07 00:46:26.920198] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:10.935 [2024-12-07 00:46:26.921827] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.935 passed 00:19:10.935 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-07 00:46:27.005967] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:11.196 [2024-12-07 00:46:27.096002] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:11.196 [2024-12-07 00:46:27.104018] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:11.196 [2024-12-07 00:46:27.112005] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:11.196 [2024-12-07 00:46:27.120001] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:11.196 [2024-12-07 00:46:27.149120] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:11.196 passed 00:19:11.196 Test: admin_create_io_sq_verify_pc ...[2024-12-07 00:46:27.235234] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:11.196 [2024-12-07 00:46:27.252019] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:11.196 [2024-12-07 00:46:27.269675] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:11.196 passed 00:19:11.456 Test: admin_create_io_qp_max_qps ...[2024-12-07 00:46:27.352257] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:12.394 [2024-12-07 00:46:28.451016] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:12.965 [2024-12-07 00:46:28.838177] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:12.965 passed 00:19:12.965 Test: admin_create_io_sq_shared_cq ...[2024-12-07 00:46:28.921424] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:12.965 [2024-12-07 00:46:29.053007] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:12.965 [2024-12-07 00:46:29.090090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.227 passed 00:19:13.227 00:19:13.227 Run Summary: Type Total Ran Passed Failed Inactive 00:19:13.227 suites 1 1 n/a 0 0 00:19:13.227 tests 18 18 18 0 0 00:19:13.227 asserts 
360 360 360 0 n/a 00:19:13.227 00:19:13.227 Elapsed time = 1.562 seconds 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 245488 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 245488 ']' 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 245488 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 245488 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 245488' 00:19:13.227 killing process with pid 245488 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 245488 00:19:13.227 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 245488 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:13.488 00:19:13.488 real 0m5.790s 00:19:13.488 user 0m16.262s 00:19:13.488 sys 0m0.530s 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.488 ************************************ 00:19:13.488 END TEST nvmf_vfio_user_nvme_compliance 00:19:13.488 ************************************ 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.488 ************************************ 00:19:13.488 START TEST nvmf_vfio_user_fuzz 00:19:13.488 ************************************ 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:13.488 * Looking for test storage... 
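Before the fuzz test proper starts, note that the compliance teardown traced just above follows the autotest killprocess pattern: confirm the PID is still alive with kill -0, check the process name with ps so a sudo wrapper is never signalled by mistake, then kill and wait. A minimal sketch of that pattern, reconstructed from the trace (the real helper in common/autotest_common.sh handles the sudo case and other options that are simplified away here):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # no pid given, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0         # process already gone
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1             # real helper special-cases sudo; skipped in this sketch
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # wait works here because the target was started by this shell
}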
00:19:13.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.488 --rc genhtml_branch_coverage=1 00:19:13.488 --rc genhtml_function_coverage=1 00:19:13.488 --rc genhtml_legend=1 00:19:13.488 --rc geninfo_all_blocks=1 00:19:13.488 --rc geninfo_unexecuted_blocks=1 00:19:13.488 00:19:13.488 ' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.488 --rc genhtml_branch_coverage=1 00:19:13.488 --rc genhtml_function_coverage=1 00:19:13.488 --rc genhtml_legend=1 00:19:13.488 --rc geninfo_all_blocks=1 00:19:13.488 --rc geninfo_unexecuted_blocks=1 00:19:13.488 00:19:13.488 ' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.488 --rc genhtml_branch_coverage=1 00:19:13.488 --rc genhtml_function_coverage=1 00:19:13.488 --rc genhtml_legend=1 00:19:13.488 --rc geninfo_all_blocks=1 00:19:13.488 --rc geninfo_unexecuted_blocks=1 00:19:13.488 00:19:13.488 ' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.488 --rc genhtml_branch_coverage=1 00:19:13.488 --rc genhtml_function_coverage=1 00:19:13.488 --rc genhtml_legend=1 00:19:13.488 --rc geninfo_all_blocks=1 00:19:13.488 --rc geninfo_unexecuted_blocks=1 00:19:13.488 00:19:13.488 ' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.488 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:13.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=246232 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 246232' 00:19:13.489 Process pid: 246232 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 246232 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 246232 ']' 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
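The "[: : integer expression expected" line above is not a test failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' with a variable that expands to the empty string, test's -eq needs an integer, so bash prints the warning, the condition evaluates false and the script carries on. A two-line sketch of the failure mode and one defensive variant (illustrative only, not what common.sh actually does):

val=''
[ "$val" -eq 1 ] && echo matched        # prints "[: : integer expression expected"; condition is false
[ "${val:-0}" -eq 1 ] && echo matched   # empty value defaults to 0, no warning, still false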
00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.489 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.057 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.057 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:14.057 00:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 malloc0 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
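The setup traced above builds the fuzz target over JSON-RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 carrying that namespace, and a listener at /var/run/vfio-user. The same sequence issued directly with scripts/rpc.py, which is what rpc_cmd forwards to; this sketch assumes nvmf_tgt is already running, the default /var/tmp/spdk.sock RPC socket, and that it is run from the SPDK repository root:

mkdir -p /var/run/vfio-user
scripts/rpc.py nvmf_create_transport -t VFIOUSER
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows reaches that listener through the trid string assembled above; judging from the "random_seed" lines in its summary and the roughly 30-second gap in the timestamps, -S seeds the generator and -t bounds the runtime in seconds.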
00:19:14.999 00:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:47.074 Fuzzing completed. Shutting down the fuzz application 00:19:47.074 00:19:47.074 Dumping successful admin opcodes: 00:19:47.074 9, 10, 00:19:47.074 Dumping successful io opcodes: 00:19:47.074 0, 00:19:47.074 NS: 0x20000081ef00 I/O qp, Total commands completed: 652786, total successful commands: 2536, random_seed: 2685258432 00:19:47.074 NS: 0x20000081ef00 admin qp, Total commands completed: 132032, total successful commands: 29, random_seed: 634887488 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 246232 ']' 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 246232' 00:19:47.074 killing process with pid 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 246232 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:47.074 00:19:47.074 real 0m32.164s 00:19:47.074 user 0m29.667s 00:19:47.074 sys 0m29.545s 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.074 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.074 ************************************ 
00:19:47.074 END TEST nvmf_vfio_user_fuzz 00:19:47.074 ************************************ 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.075 ************************************ 00:19:47.075 START TEST nvmf_auth_target 00:19:47.075 ************************************ 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:47.075 * Looking for test storage... 00:19:47.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:47.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.075 --rc genhtml_branch_coverage=1 00:19:47.075 --rc genhtml_function_coverage=1 00:19:47.075 --rc genhtml_legend=1 00:19:47.075 --rc geninfo_all_blocks=1 00:19:47.075 --rc geninfo_unexecuted_blocks=1 00:19:47.075 00:19:47.075 ' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:47.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.075 --rc genhtml_branch_coverage=1 00:19:47.075 --rc genhtml_function_coverage=1 00:19:47.075 --rc genhtml_legend=1 00:19:47.075 --rc geninfo_all_blocks=1 00:19:47.075 --rc geninfo_unexecuted_blocks=1 00:19:47.075 00:19:47.075 ' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:47.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.075 --rc genhtml_branch_coverage=1 00:19:47.075 --rc genhtml_function_coverage=1 00:19:47.075 --rc genhtml_legend=1 00:19:47.075 --rc geninfo_all_blocks=1 00:19:47.075 --rc geninfo_unexecuted_blocks=1 00:19:47.075 00:19:47.075 ' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:47.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.075 --rc genhtml_branch_coverage=1 00:19:47.075 --rc genhtml_function_coverage=1 00:19:47.075 --rc genhtml_legend=1 00:19:47.075 --rc geninfo_all_blocks=1 00:19:47.075 --rc geninfo_unexecuted_blocks=1 00:19:47.075 00:19:47.075 ' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.075 00:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.075 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.076 00:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:48.015 
00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.015 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:48.016 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.016 00:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:48.016 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:48.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:48.016 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.016 00:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.016 00:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:19:48.016 00:19:48.016 --- 10.0.0.2 ping statistics --- 00:19:48.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.016 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:19:48.016 00:19:48.016 --- 10.0.0.1 ping statistics --- 00:19:48.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.016 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.016 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=251676 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 251676 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 251676 ']' 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
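nvmftestinit above wires the two ice ports it discovered under /sys/bus/pci/devices/<bdf>/net into a back-to-back loop: cvl_0_0 becomes the target side inside a private network namespace, cvl_0_1 stays in the root namespace as the initiator, and the two pings prove 10.0.0.1 and 10.0.0.2 can reach each other before the auth target is started. Condensed from the trace, with interface names and addresses exactly as logged (the addr-flush steps are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic from the target
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator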
00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.017 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=251702 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=731b3d75b56aa4a802cd9db780b4cffe714edcc75fa05ed0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UAn 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 731b3d75b56aa4a802cd9db780b4cffe714edcc75fa05ed0 0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 731b3d75b56aa4a802cd9db780b4cffe714edcc75fa05ed0 0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=731b3d75b56aa4a802cd9db780b4cffe714edcc75fa05ed0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:48.276 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
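The "python -" step traced above turns the raw hex secret into the DHHC-1 string used on the wire. Below is a stand-alone sketch of that formatting step, not SPDK's own helper: it assumes the DH-HMAC-CHAP secret representation of base64(secret bytes followed by a CRC32), with little-endian CRC byte order as an assumption the trace itself does not show.

format_dhchap_key_sketch() {   # args: <hex-secret> <digest-id 0..3>
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string itself is the secret, as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 appended; little-endian is an assumption
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$1" "$2"
}
# Example with the secret generated above (digest id 0 corresponds to "null"):
format_dhchap_key_sketch 731b3d75b56aa4a802cd9db780b4cffe714edcc75fa05ed0 0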
00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UAn 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UAn 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.UAn 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b81b2c732b9b47f1221f59e1663e56705d18dcc3b82b4bccd3cedd514aae7a2d 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.U5g 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b81b2c732b9b47f1221f59e1663e56705d18dcc3b82b4bccd3cedd514aae7a2d 3 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b81b2c732b9b47f1221f59e1663e56705d18dcc3b82b4bccd3cedd514aae7a2d 3 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b81b2c732b9b47f1221f59e1663e56705d18dcc3b82b4bccd3cedd514aae7a2d 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.U5g 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.U5g 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.U5g 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
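Each keys[i]/ckeys[i] entry in the trace follows the same recipe: read len/2 random bytes as a hex string, wrap it with the digest id, and store the result 0600 in a temp file. A compact sketch of that recipe, reusing the hypothetical format_dhchap_key_sketch helper above (names are illustrative, not SPDK's):

declare -A digest_ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
gen_dhchap_key_sketch() {        # args: <digest-name> <hex-length>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)            # len hex chars == len/2 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key_sketch "$key" "${digest_ids[$digest]}" > "$file"
    chmod 0600 "$file"                                        # secrets stay private to the test user
    echo "$file"
}
keys[0]=$(gen_dhchap_key_sketch null 48); ckeys[0]=$(gen_dhchap_key_sketch sha512 64)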
00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2853bbe66e81a531ad01d205442e950e 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pGQ 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2853bbe66e81a531ad01d205442e950e 1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2853bbe66e81a531ad01d205442e950e 1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2853bbe66e81a531ad01d205442e950e 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pGQ 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pGQ 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.pGQ 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e4b8cfc2a8ccd5742b5f4d74bf031764cc620eefb8b1ce6 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8BU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e4b8cfc2a8ccd5742b5f4d74bf031764cc620eefb8b1ce6 2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e4b8cfc2a8ccd5742b5f4d74bf031764cc620eefb8b1ce6 2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.536 00:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e4b8cfc2a8ccd5742b5f4d74bf031764cc620eefb8b1ce6 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8BU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8BU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.8BU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c8b0022138bfd4874f0e371ebde99adad0f48ea1e16c43c 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AaU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c8b0022138bfd4874f0e371ebde99adad0f48ea1e16c43c 2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c8b0022138bfd4874f0e371ebde99adad0f48ea1e16c43c 2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c8b0022138bfd4874f0e371ebde99adad0f48ea1e16c43c 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AaU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AaU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AaU 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
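The trace pairs each subsystem key with a controller key of a different digest and length, which is what exercises bidirectional authentication with mismatched key strengths; sketched with the same hypothetical helper (the pairings below are the ones this log goes on to generate, including an empty ckeys[3]):

keys[1]=$(gen_dhchap_key_sketch sha256 32);  ckeys[1]=$(gen_dhchap_key_sketch sha384 48)
keys[2]=$(gen_dhchap_key_sketch sha384 48);  ckeys[2]=$(gen_dhchap_key_sketch sha256 32)
keys[3]=$(gen_dhchap_key_sketch sha512 64);  ckeys[3]=   # index 3 deliberately has no controller key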
00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2ea201f8f9c74a6a750efc2995ce36c 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:48.536 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MN8 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2ea201f8f9c74a6a750efc2995ce36c 1 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2ea201f8f9c74a6a750efc2995ce36c 1 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2ea201f8f9c74a6a750efc2995ce36c 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MN8 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MN8 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MN8 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c6a163d2b384d28cf5516b3928c5b3e0b61a6720c341d6f4382f7d9f50c2c2cc 00:19:48.537 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:48.795 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Rg0 00:19:48.795 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key c6a163d2b384d28cf5516b3928c5b3e0b61a6720c341d6f4382f7d9f50c2c2cc 3 00:19:48.795 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c6a163d2b384d28cf5516b3928c5b3e0b61a6720c341d6f4382f7d9f50c2c2cc 3 00:19:48.795 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c6a163d2b384d28cf5516b3928c5b3e0b61a6720c341d6f4382f7d9f50c2c2cc 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Rg0 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Rg0 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Rg0 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 251676 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 251676 ']' 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.796 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.055 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.055 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.055 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 251702 /var/tmp/host.sock 00:19:49.055 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 251702 ']' 00:19:49.055 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:49.056 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.056 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:49.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
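The rest of the trace registers these key files with both RPC servers and then performs authenticated attaches and kernel connects. The pattern it follows can be summarised by the sketch below; the rpc.py path, NQNs, addresses and key files are the ones visible in this log, only the key0/ckey0 pair is shown, and the assumption that the literal DHHC-1 strings seen later are simply the contents of these key files is flagged in the comments.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# 1. Make the key material known to the target (default /var/tmp/spdk.sock)
#    and to the host application (-s /var/tmp/host.sock).
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.UAn
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UAn
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g
# 2. Pin the host to one digest/dhgroup combination and allow the host NQN on
#    the subsystem with that key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach from the SPDK host app, then from the kernel initiator using the
#    formatted DHHC-1 strings directly (assumed to be the key file contents).
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.UAn)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.U5g)"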
00:19:49.056 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.056 00:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UAn 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UAn 00:19:49.314 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UAn 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.U5g ]] 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g 00:19:49.572 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pGQ 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.829 00:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pGQ 00:19:49.829 00:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pGQ 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.8BU ]] 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8BU 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8BU 00:19:50.086 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8BU 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AaU 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AaU 00:19:50.343 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AaU 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MN8 ]] 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN8 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN8 00:19:50.601 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN8 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:50.860 00:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rg0 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Rg0 00:19:50.860 00:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Rg0 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.118 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.377 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.377 
00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.948 00:19:51.948 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.948 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.948 00:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.207 { 00:19:52.207 "cntlid": 1, 00:19:52.207 "qid": 0, 00:19:52.207 "state": "enabled", 00:19:52.207 "thread": "nvmf_tgt_poll_group_000", 00:19:52.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.207 "listen_address": { 00:19:52.207 "trtype": "TCP", 00:19:52.207 "adrfam": "IPv4", 00:19:52.207 "traddr": "10.0.0.2", 00:19:52.207 "trsvcid": "4420" 00:19:52.207 }, 00:19:52.207 "peer_address": { 00:19:52.207 "trtype": "TCP", 00:19:52.207 "adrfam": "IPv4", 00:19:52.207 "traddr": "10.0.0.1", 00:19:52.207 "trsvcid": "53654" 00:19:52.207 }, 00:19:52.207 "auth": { 00:19:52.207 "state": "completed", 00:19:52.207 "digest": "sha256", 00:19:52.207 "dhgroup": "null" 00:19:52.207 } 00:19:52.207 } 00:19:52.207 ]' 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.207 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.466 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:19:52.466 00:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.738 00:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.738 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.738 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.997 { 00:19:57.997 "cntlid": 3, 00:19:57.997 "qid": 0, 00:19:57.997 "state": "enabled", 00:19:57.997 "thread": "nvmf_tgt_poll_group_000", 00:19:57.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.997 "listen_address": { 00:19:57.997 "trtype": "TCP", 00:19:57.997 "adrfam": "IPv4", 00:19:57.997 "traddr": "10.0.0.2", 00:19:57.997 "trsvcid": "4420" 00:19:57.997 }, 00:19:57.997 "peer_address": { 00:19:57.997 "trtype": "TCP", 00:19:57.997 "adrfam": "IPv4", 00:19:57.997 "traddr": "10.0.0.1", 00:19:57.997 "trsvcid": "60636" 00:19:57.997 }, 00:19:57.997 "auth": { 00:19:57.997 "state": "completed", 00:19:57.997 "digest": "sha256", 00:19:57.997 "dhgroup": "null" 00:19:57.997 } 00:19:57.997 } 00:19:57.997 ]' 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.997 00:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.997 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:57.997 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.997 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.997 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.997 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.256 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:19:58.256 00:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:59.191 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.449 00:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.449 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.707 00:19:59.707 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.707 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.707 00:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.277 { 00:20:00.277 "cntlid": 5, 00:20:00.277 "qid": 0, 00:20:00.277 "state": "enabled", 00:20:00.277 "thread": "nvmf_tgt_poll_group_000", 00:20:00.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.277 "listen_address": { 00:20:00.277 "trtype": "TCP", 00:20:00.277 "adrfam": "IPv4", 00:20:00.277 "traddr": "10.0.0.2", 00:20:00.277 "trsvcid": "4420" 00:20:00.277 }, 00:20:00.277 "peer_address": { 00:20:00.277 "trtype": "TCP", 00:20:00.277 "adrfam": "IPv4", 00:20:00.277 "traddr": "10.0.0.1", 00:20:00.277 "trsvcid": "60678" 00:20:00.277 }, 00:20:00.277 "auth": { 00:20:00.277 "state": "completed", 00:20:00.277 "digest": "sha256", 00:20:00.277 "dhgroup": "null" 00:20:00.277 } 00:20:00.277 } 00:20:00.277 ]' 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.277 00:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.277 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.534 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:00.534 00:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:01.471 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:01.728 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:01.728 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.728 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.728 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.728 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.729 00:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.986 00:20:01.986 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.986 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.986 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.246 { 00:20:02.246 "cntlid": 7, 00:20:02.246 "qid": 0, 00:20:02.246 "state": "enabled", 00:20:02.246 "thread": "nvmf_tgt_poll_group_000", 00:20:02.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.246 "listen_address": { 00:20:02.246 "trtype": "TCP", 00:20:02.246 "adrfam": "IPv4", 00:20:02.246 "traddr": "10.0.0.2", 00:20:02.246 "trsvcid": "4420" 00:20:02.246 }, 00:20:02.246 "peer_address": { 00:20:02.246 "trtype": "TCP", 00:20:02.246 "adrfam": "IPv4", 00:20:02.246 "traddr": "10.0.0.1", 00:20:02.246 "trsvcid": "60706" 00:20:02.246 }, 00:20:02.246 "auth": { 00:20:02.246 "state": "completed", 00:20:02.246 "digest": "sha256", 00:20:02.246 "dhgroup": "null" 00:20:02.246 } 00:20:02.246 } 00:20:02.246 ]' 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.246 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.504 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.504 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.504 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.765 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:02.765 00:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.704 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.705 00:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.271 00:20:04.271 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.271 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.271 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.530 { 00:20:04.530 "cntlid": 9, 00:20:04.530 "qid": 0, 00:20:04.530 "state": "enabled", 00:20:04.530 "thread": "nvmf_tgt_poll_group_000", 00:20:04.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.530 "listen_address": { 00:20:04.530 "trtype": "TCP", 00:20:04.530 "adrfam": "IPv4", 00:20:04.530 "traddr": "10.0.0.2", 00:20:04.530 "trsvcid": "4420" 00:20:04.530 }, 00:20:04.530 "peer_address": { 00:20:04.530 "trtype": "TCP", 00:20:04.530 "adrfam": "IPv4", 00:20:04.530 "traddr": "10.0.0.1", 00:20:04.530 "trsvcid": "42070" 00:20:04.530 }, 00:20:04.530 "auth": { 00:20:04.530 "state": "completed", 00:20:04.530 "digest": "sha256", 00:20:04.530 "dhgroup": "ffdhe2048" 00:20:04.530 } 00:20:04.530 } 00:20:04.530 ]' 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
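The round above is one complete sha256/ffdhe2048 pass with key0. Stripped of the xtrace noise, the SPDK-side sequence reduces to roughly the sketch below; key0/ckey0 are assumed to be key names registered with the target and the bdev_nvme host earlier in the test, rpc.py is SPDK's scripts/rpc.py, and the target is assumed to answer on the default RPC socket:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Limit the SPDK host to the digest/dhgroup combination under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow this host NQN on the target, bound to key0 (ckey0 makes the auth bidirectional).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the SPDK host, authenticating with the same key pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
       -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify digest/dhgroup/state through nvmf_subsystem_get_qpairs as sketched above.

# 5. Tear down; the round also re-runs the same secrets through the kernel initiator
#    (nvme connect, sketched further below) before the host entry is removed.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"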
\f\f\d\h\e\2\0\4\8 ]] 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.530 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.791 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:04.791 00:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.727 00:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.985 00:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.985 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.245 00:20:06.245 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.245 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.245 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.505 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.764 { 00:20:06.764 "cntlid": 11, 00:20:06.764 "qid": 0, 00:20:06.764 "state": "enabled", 00:20:06.764 "thread": "nvmf_tgt_poll_group_000", 00:20:06.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.764 "listen_address": { 00:20:06.764 "trtype": "TCP", 00:20:06.764 "adrfam": "IPv4", 00:20:06.764 "traddr": "10.0.0.2", 00:20:06.764 "trsvcid": "4420" 00:20:06.764 }, 00:20:06.764 "peer_address": { 00:20:06.764 "trtype": "TCP", 00:20:06.764 "adrfam": "IPv4", 00:20:06.764 "traddr": "10.0.0.1", 00:20:06.764 "trsvcid": "42108" 00:20:06.764 }, 00:20:06.764 "auth": { 00:20:06.764 "state": "completed", 00:20:06.764 "digest": "sha256", 00:20:06.764 "dhgroup": "ffdhe2048" 00:20:06.764 } 00:20:06.764 } 00:20:06.764 ]' 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.764 00:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.764 00:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.023 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:07.023 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.959 00:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.217 00:47:24 
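The nvme connect calls above push the same key pair through the kernel NVMe/TCP initiator rather than the SPDK bdev_nvme host. Reduced to its essentials the call looks like the sketch below; the two DHHC-1 strings are placeholders for the host and controller secrets printed in the trace, and -i/-l simply mirror the queue-count and ctrl-loss settings the harness uses:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOST_SECRET='DHHC-1:01:<host secret from the trace>'    # placeholder
CTRL_SECRET='DHHC-1:02:<ctrl secret from the trace>'    # placeholder

# Bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the host,
# --dhchap-ctrl-secret additionally makes the host verify the controller.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
     -q "$HOSTNQN" --hostid "$HOSTID" \
     --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

# The trace then disconnects again before moving on to the next key.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0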
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.217 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.218 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.218 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.218 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.218 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.218 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.477 00:20:08.477 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.477 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.477 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.736 { 00:20:08.736 "cntlid": 13, 00:20:08.736 "qid": 0, 00:20:08.736 "state": "enabled", 00:20:08.736 "thread": "nvmf_tgt_poll_group_000", 00:20:08.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.736 "listen_address": { 00:20:08.736 "trtype": "TCP", 00:20:08.736 "adrfam": "IPv4", 00:20:08.736 "traddr": "10.0.0.2", 00:20:08.736 "trsvcid": "4420" 00:20:08.736 }, 00:20:08.736 "peer_address": { 00:20:08.736 "trtype": "TCP", 00:20:08.736 "adrfam": "IPv4", 00:20:08.736 "traddr": "10.0.0.1", 00:20:08.736 "trsvcid": "42128" 00:20:08.736 }, 00:20:08.736 "auth": { 00:20:08.736 "state": "completed", 00:20:08.736 "digest": 
"sha256", 00:20:08.736 "dhgroup": "ffdhe2048" 00:20:08.736 } 00:20:08.736 } 00:20:08.736 ]' 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.736 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.994 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.994 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.994 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.994 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.994 00:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.252 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:09.253 00:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.194 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.452 00:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.452 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.712 00:20:10.712 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.712 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.712 00:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.971 { 00:20:10.971 "cntlid": 15, 00:20:10.971 "qid": 0, 00:20:10.971 "state": "enabled", 00:20:10.971 "thread": "nvmf_tgt_poll_group_000", 00:20:10.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.971 "listen_address": { 00:20:10.971 "trtype": "TCP", 00:20:10.971 "adrfam": "IPv4", 00:20:10.971 "traddr": "10.0.0.2", 00:20:10.971 "trsvcid": "4420" 00:20:10.971 }, 00:20:10.971 "peer_address": { 00:20:10.971 "trtype": "TCP", 00:20:10.971 "adrfam": "IPv4", 00:20:10.971 "traddr": "10.0.0.1", 00:20:10.971 
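Note the asymmetry in the key3 round above: nvmf_subsystem_add_host and bdev_nvme_attach_controller are passed --dhchap-key key3 only, with no --dhchap-ctrlr-key, and the matching nvme connect carries only --dhchap-secret. key3 therefore exercises unidirectional authentication (the host proves itself, the controller is not challenged), while the key0-key2 rounds also pass a controller secret and are bidirectional. A minimal target-side sketch of the two variants, using the NQNs from the trace:

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Bidirectional: host key plus controller key, as in the key0/key1/key2 rounds.
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

# Unidirectional: host key only, as in the key3 rounds.
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3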
"trsvcid": "42160" 00:20:10.971 }, 00:20:10.971 "auth": { 00:20:10.971 "state": "completed", 00:20:10.971 "digest": "sha256", 00:20:10.971 "dhgroup": "ffdhe2048" 00:20:10.971 } 00:20:10.971 } 00:20:10.971 ]' 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.971 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.230 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.230 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.230 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.230 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.230 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.488 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:11.488 00:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.430 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:12.689 00:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.689 00:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.947 00:20:12.947 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.947 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.947 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.207 { 00:20:13.207 "cntlid": 17, 00:20:13.207 "qid": 0, 00:20:13.207 "state": "enabled", 00:20:13.207 "thread": "nvmf_tgt_poll_group_000", 00:20:13.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.207 "listen_address": { 00:20:13.207 "trtype": "TCP", 00:20:13.207 "adrfam": "IPv4", 
00:20:13.207 "traddr": "10.0.0.2", 00:20:13.207 "trsvcid": "4420" 00:20:13.207 }, 00:20:13.207 "peer_address": { 00:20:13.207 "trtype": "TCP", 00:20:13.207 "adrfam": "IPv4", 00:20:13.207 "traddr": "10.0.0.1", 00:20:13.207 "trsvcid": "42174" 00:20:13.207 }, 00:20:13.207 "auth": { 00:20:13.207 "state": "completed", 00:20:13.207 "digest": "sha256", 00:20:13.207 "dhgroup": "ffdhe3072" 00:20:13.207 } 00:20:13.207 } 00:20:13.207 ]' 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.207 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.465 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.466 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.466 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.466 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.466 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.724 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:13.724 00:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.663 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.922 00:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.180 00:20:15.180 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.180 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.180 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.440 { 
00:20:15.440 "cntlid": 19, 00:20:15.440 "qid": 0, 00:20:15.440 "state": "enabled", 00:20:15.440 "thread": "nvmf_tgt_poll_group_000", 00:20:15.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.440 "listen_address": { 00:20:15.440 "trtype": "TCP", 00:20:15.440 "adrfam": "IPv4", 00:20:15.440 "traddr": "10.0.0.2", 00:20:15.440 "trsvcid": "4420" 00:20:15.440 }, 00:20:15.440 "peer_address": { 00:20:15.440 "trtype": "TCP", 00:20:15.440 "adrfam": "IPv4", 00:20:15.440 "traddr": "10.0.0.1", 00:20:15.440 "trsvcid": "44102" 00:20:15.440 }, 00:20:15.440 "auth": { 00:20:15.440 "state": "completed", 00:20:15.440 "digest": "sha256", 00:20:15.440 "dhgroup": "ffdhe3072" 00:20:15.440 } 00:20:15.440 } 00:20:15.440 ]' 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.440 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.699 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.699 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.699 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.957 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:15.957 00:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.894 00:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.152 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.153 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.411 00:20:17.411 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.411 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.411 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 00:47:33 
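Two SPDK applications are being driven throughout this trace, each over its own JSON-RPC socket: the nvmf target (the rpc_cmd calls such as nvmf_subsystem_add_host and nvmf_subsystem_get_qpairs) and the host/initiator application (the hostrpc calls such as bdev_nvme_set_options and bdev_nvme_attach_controller, which target/auth.sh@31 forwards to rpc.py with -s /var/tmp/host.sock). A simplified pair of wrappers along those lines; SPDK_ROOT is an assumed variable pointing at the SPDK checkout, and the target socket is assumed to be the default since this excerpt never shows it:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path as seen in the trace

hostrpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }   # host application
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }                         # target, default socket assumed

hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'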
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.670 { 00:20:17.670 "cntlid": 21, 00:20:17.670 "qid": 0, 00:20:17.670 "state": "enabled", 00:20:17.670 "thread": "nvmf_tgt_poll_group_000", 00:20:17.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.670 "listen_address": { 00:20:17.670 "trtype": "TCP", 00:20:17.670 "adrfam": "IPv4", 00:20:17.670 "traddr": "10.0.0.2", 00:20:17.670 "trsvcid": "4420" 00:20:17.670 }, 00:20:17.670 "peer_address": { 00:20:17.670 "trtype": "TCP", 00:20:17.670 "adrfam": "IPv4", 00:20:17.670 "traddr": "10.0.0.1", 00:20:17.670 "trsvcid": "44138" 00:20:17.670 }, 00:20:17.670 "auth": { 00:20:17.670 "state": "completed", 00:20:17.670 "digest": "sha256", 00:20:17.670 "dhgroup": "ffdhe3072" 00:20:17.670 } 00:20:17.670 } 00:20:17.670 ]' 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.670 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.929 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.929 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.929 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.929 00:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.188 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:18.188 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.129 00:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.129 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.389 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.389 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.389 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.389 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.647 00:20:19.647 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.647 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.647 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.905 00:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.905 { 00:20:19.905 "cntlid": 23, 00:20:19.905 "qid": 0, 00:20:19.905 "state": "enabled", 00:20:19.905 "thread": "nvmf_tgt_poll_group_000", 00:20:19.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.905 "listen_address": { 00:20:19.905 "trtype": "TCP", 00:20:19.905 "adrfam": "IPv4", 00:20:19.905 "traddr": "10.0.0.2", 00:20:19.905 "trsvcid": "4420" 00:20:19.905 }, 00:20:19.905 "peer_address": { 00:20:19.905 "trtype": "TCP", 00:20:19.905 "adrfam": "IPv4", 00:20:19.905 "traddr": "10.0.0.1", 00:20:19.905 "trsvcid": "44172" 00:20:19.905 }, 00:20:19.905 "auth": { 00:20:19.905 "state": "completed", 00:20:19.905 "digest": "sha256", 00:20:19.905 "dhgroup": "ffdhe3072" 00:20:19.905 } 00:20:19.905 } 00:20:19.905 ]' 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.905 00:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.905 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.905 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.905 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.905 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.905 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.475 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:20.475 00:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.470 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.034 00:20:22.034 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.034 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.034 00:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
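The ffdhe4096 rounds that begin here follow the same pattern once more, which makes the overall shape of this part of auth.sh readable straight from the trace: an outer loop over DH groups and an inner loop over the configured key indices, each iteration reconfiguring the host and then running one connect_authenticate cycle. A schematic reconstruction; the array contents are assumptions based only on what this excerpt exercises, and connect_authenticate stands for the add_host/attach/verify/teardown sequence sketched after the ffdhe2048 key0 round:

dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen so far in this excerpt
keys=(key0 key1 key2 key3)                      # key names assumed from the rounds above
ckeys=(ckey0 ckey1 ckey2 "")                    # key3 carries no controller key (unidirectional)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Pin the host to the pair under test, then run one full round.
        rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
               --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done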
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.292 { 00:20:22.292 "cntlid": 25, 00:20:22.292 "qid": 0, 00:20:22.292 "state": "enabled", 00:20:22.292 "thread": "nvmf_tgt_poll_group_000", 00:20:22.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.292 "listen_address": { 00:20:22.292 "trtype": "TCP", 00:20:22.292 "adrfam": "IPv4", 00:20:22.292 "traddr": "10.0.0.2", 00:20:22.292 "trsvcid": "4420" 00:20:22.292 }, 00:20:22.292 "peer_address": { 00:20:22.292 "trtype": "TCP", 00:20:22.292 "adrfam": "IPv4", 00:20:22.292 "traddr": "10.0.0.1", 00:20:22.292 "trsvcid": "44196" 00:20:22.292 }, 00:20:22.292 "auth": { 00:20:22.292 "state": "completed", 00:20:22.292 "digest": "sha256", 00:20:22.292 "dhgroup": "ffdhe4096" 00:20:22.292 } 00:20:22.292 } 00:20:22.292 ]' 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.292 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.293 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.293 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.293 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.551 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:22.551 00:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.488 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.744 00:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.310 00:20:24.310 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.310 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.310 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.568 { 00:20:24.568 "cntlid": 27, 00:20:24.568 "qid": 0, 00:20:24.568 "state": "enabled", 00:20:24.568 "thread": "nvmf_tgt_poll_group_000", 00:20:24.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.568 "listen_address": { 00:20:24.568 "trtype": "TCP", 00:20:24.568 "adrfam": "IPv4", 00:20:24.568 "traddr": "10.0.0.2", 00:20:24.568 "trsvcid": "4420" 00:20:24.568 }, 00:20:24.568 "peer_address": { 00:20:24.568 "trtype": "TCP", 00:20:24.568 "adrfam": "IPv4", 00:20:24.568 "traddr": "10.0.0.1", 00:20:24.568 "trsvcid": "52682" 00:20:24.568 }, 00:20:24.568 "auth": { 00:20:24.568 "state": "completed", 00:20:24.568 "digest": "sha256", 00:20:24.568 "dhgroup": "ffdhe4096" 00:20:24.568 } 00:20:24.568 } 00:20:24.568 ]' 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.568 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.826 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:24.826 00:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:25.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.766 00:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.024 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.282 00:20:26.540 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
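Each pass in this trace repeats the same verification step: query the subsystem's queue pairs and confirm that the negotiated digest, DH group and authentication state match what was just configured, then drop the host-side controller. The lines below condense that pattern from the commands visible in the trace; this is an illustrative sketch only (the rpc.py path, socket and subsystem NQN are copied from the log, the shell variables are not part of target/auth.sh).

  # per-iteration verification, condensed from the trace above (sketch, not the real script)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBSYS=nqn.2024-03.io.spdk:cnode0
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")           # target-side RPC, default socket
  echo "$qpairs" | jq -r '.[0].auth.digest'                      # expected: sha256
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'                     # expected: the dhgroup under test, e.g. ffdhe4096
  echo "$qpairs" | jq -r '.[0].auth.state'                       # expected: completed
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 # host-side RPC to drop the bdev controller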
00:20:26.540 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.540 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.798 { 00:20:26.798 "cntlid": 29, 00:20:26.798 "qid": 0, 00:20:26.798 "state": "enabled", 00:20:26.798 "thread": "nvmf_tgt_poll_group_000", 00:20:26.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.798 "listen_address": { 00:20:26.798 "trtype": "TCP", 00:20:26.798 "adrfam": "IPv4", 00:20:26.798 "traddr": "10.0.0.2", 00:20:26.798 "trsvcid": "4420" 00:20:26.798 }, 00:20:26.798 "peer_address": { 00:20:26.798 "trtype": "TCP", 00:20:26.798 "adrfam": "IPv4", 00:20:26.798 "traddr": "10.0.0.1", 00:20:26.798 "trsvcid": "52696" 00:20:26.798 }, 00:20:26.798 "auth": { 00:20:26.798 "state": "completed", 00:20:26.798 "digest": "sha256", 00:20:26.798 "dhgroup": "ffdhe4096" 00:20:26.798 } 00:20:26.798 } 00:20:26.798 ]' 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.798 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.799 00:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.058 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:27.058 00:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: 
--dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.997 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.256 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.826 00:20:28.826 00:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.826 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.826 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.085 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.085 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.085 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.085 00:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.085 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.085 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.086 { 00:20:29.086 "cntlid": 31, 00:20:29.086 "qid": 0, 00:20:29.086 "state": "enabled", 00:20:29.086 "thread": "nvmf_tgt_poll_group_000", 00:20:29.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.086 "listen_address": { 00:20:29.086 "trtype": "TCP", 00:20:29.086 "adrfam": "IPv4", 00:20:29.086 "traddr": "10.0.0.2", 00:20:29.086 "trsvcid": "4420" 00:20:29.086 }, 00:20:29.086 "peer_address": { 00:20:29.086 "trtype": "TCP", 00:20:29.086 "adrfam": "IPv4", 00:20:29.086 "traddr": "10.0.0.1", 00:20:29.086 "trsvcid": "52726" 00:20:29.086 }, 00:20:29.086 "auth": { 00:20:29.086 "state": "completed", 00:20:29.086 "digest": "sha256", 00:20:29.086 "dhgroup": "ffdhe4096" 00:20:29.086 } 00:20:29.086 } 00:20:29.086 ]' 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.086 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.344 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:29.344 00:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.282 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.541 00:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.110 00:20:31.110 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.110 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.110 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.369 { 00:20:31.369 "cntlid": 33, 00:20:31.369 "qid": 0, 00:20:31.369 "state": "enabled", 00:20:31.369 "thread": "nvmf_tgt_poll_group_000", 00:20:31.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.369 "listen_address": { 00:20:31.369 "trtype": "TCP", 00:20:31.369 "adrfam": "IPv4", 00:20:31.369 "traddr": "10.0.0.2", 00:20:31.369 "trsvcid": "4420" 00:20:31.369 }, 00:20:31.369 "peer_address": { 00:20:31.369 "trtype": "TCP", 00:20:31.369 "adrfam": "IPv4", 00:20:31.369 "traddr": "10.0.0.1", 00:20:31.369 "trsvcid": "52756" 00:20:31.369 }, 00:20:31.369 "auth": { 00:20:31.369 "state": "completed", 00:20:31.369 "digest": "sha256", 00:20:31.369 "dhgroup": "ffdhe6144" 00:20:31.369 } 00:20:31.369 } 00:20:31.369 ]' 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.369 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.629 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret 
DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:31.630 00:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.568 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.826 00:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.395 00:20:33.395 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.395 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.395 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.654 { 00:20:33.654 "cntlid": 35, 00:20:33.654 "qid": 0, 00:20:33.654 "state": "enabled", 00:20:33.654 "thread": "nvmf_tgt_poll_group_000", 00:20:33.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.654 "listen_address": { 00:20:33.654 "trtype": "TCP", 00:20:33.654 "adrfam": "IPv4", 00:20:33.654 "traddr": "10.0.0.2", 00:20:33.654 "trsvcid": "4420" 00:20:33.654 }, 00:20:33.654 "peer_address": { 00:20:33.654 "trtype": "TCP", 00:20:33.654 "adrfam": "IPv4", 00:20:33.654 "traddr": "10.0.0.1", 00:20:33.654 "trsvcid": "52792" 00:20:33.654 }, 00:20:33.654 "auth": { 00:20:33.654 "state": "completed", 00:20:33.654 "digest": "sha256", 00:20:33.654 "dhgroup": "ffdhe6144" 00:20:33.654 } 00:20:33.654 } 00:20:33.654 ]' 00:20:33.654 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.913 00:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.171 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:34.171 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:35.110 00:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.110 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.369 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.941 00:20:35.941 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.941 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.941 00:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.200 { 00:20:36.200 "cntlid": 37, 00:20:36.200 "qid": 0, 00:20:36.200 "state": "enabled", 00:20:36.200 "thread": "nvmf_tgt_poll_group_000", 00:20:36.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.200 "listen_address": { 00:20:36.200 "trtype": "TCP", 00:20:36.200 "adrfam": "IPv4", 00:20:36.200 "traddr": "10.0.0.2", 00:20:36.200 "trsvcid": "4420" 00:20:36.200 }, 00:20:36.200 "peer_address": { 00:20:36.200 "trtype": "TCP", 00:20:36.200 "adrfam": "IPv4", 00:20:36.200 "traddr": "10.0.0.1", 00:20:36.200 "trsvcid": "47756" 00:20:36.200 }, 00:20:36.200 "auth": { 00:20:36.200 "state": "completed", 00:20:36.200 "digest": "sha256", 00:20:36.200 "dhgroup": "ffdhe6144" 00:20:36.200 } 00:20:36.200 } 00:20:36.200 ]' 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:36.200 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.459 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:36.459 00:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.399 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.658 00:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.658 00:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.229 00:20:38.229 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.229 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.229 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.487 { 00:20:38.487 "cntlid": 39, 00:20:38.487 "qid": 0, 00:20:38.487 "state": "enabled", 00:20:38.487 "thread": "nvmf_tgt_poll_group_000", 00:20:38.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.487 "listen_address": { 00:20:38.487 "trtype": "TCP", 00:20:38.487 "adrfam": "IPv4", 00:20:38.487 "traddr": "10.0.0.2", 00:20:38.487 "trsvcid": "4420" 00:20:38.487 }, 00:20:38.487 "peer_address": { 00:20:38.487 "trtype": "TCP", 00:20:38.487 "adrfam": "IPv4", 00:20:38.487 "traddr": "10.0.0.1", 00:20:38.487 "trsvcid": "47784" 00:20:38.487 }, 00:20:38.487 "auth": { 00:20:38.487 "state": "completed", 00:20:38.487 "digest": "sha256", 00:20:38.487 "dhgroup": "ffdhe6144" 00:20:38.487 } 00:20:38.487 } 00:20:38.487 ]' 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.487 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.746 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:38.746 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.746 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.005 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:39.006 00:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:39.941 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.941 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.941 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.941 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.941 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.942 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.942 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.942 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.942 00:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
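The setup half of each iteration, visible throughout the trace, follows the pattern sketched below: restrict the host to one digest/DH-group combination, register the host NQN on the subsystem with the key pair under test, attach an SPDK bdev controller with those keys, and finally connect and disconnect with the kernel nvme-cli using the raw DHHC-1 secret. Addresses, NQNs and RPC names are taken from the log; the secret is shown as a placeholder rather than one of the test's keys, and the variable names are illustrative.

  # sketch of one digest/dhgroup/key iteration as driven from the host side (illustrative only)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  # restrict the host to the digest/dhgroup combination under test
  "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # register the host on the subsystem with the key pair under test (target side, default socket)
  "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # SPDK host path: attach a controller bdev, authenticating with the same named keys
  "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # kernel host path: nvme-cli connect with the raw DHHC-1 secret (placeholder, not a real key)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 'DHHC-1:00:<placeholder>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0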
00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.942 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.880 00:20:40.880 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.880 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.880 00:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.137 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.137 { 00:20:41.137 "cntlid": 41, 00:20:41.137 "qid": 0, 00:20:41.137 "state": "enabled", 00:20:41.137 "thread": "nvmf_tgt_poll_group_000", 00:20:41.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.137 "listen_address": { 00:20:41.137 "trtype": "TCP", 00:20:41.137 "adrfam": "IPv4", 00:20:41.137 "traddr": "10.0.0.2", 00:20:41.138 "trsvcid": "4420" 00:20:41.138 }, 00:20:41.138 "peer_address": { 00:20:41.138 "trtype": "TCP", 00:20:41.138 "adrfam": "IPv4", 00:20:41.138 "traddr": "10.0.0.1", 00:20:41.138 "trsvcid": "47822" 00:20:41.138 }, 00:20:41.138 "auth": { 00:20:41.138 "state": "completed", 00:20:41.138 "digest": "sha256", 00:20:41.138 "dhgroup": "ffdhe8192" 00:20:41.138 } 00:20:41.138 } 00:20:41.138 ]' 00:20:41.138 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.138 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.138 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.138 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.138 00:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.395 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.395 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.395 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.654 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:41.654 00:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.587 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.845 00:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.777 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.777 { 00:20:43.777 "cntlid": 43, 00:20:43.777 "qid": 0, 00:20:43.777 "state": "enabled", 00:20:43.777 "thread": "nvmf_tgt_poll_group_000", 00:20:43.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.777 "listen_address": { 00:20:43.777 "trtype": "TCP", 00:20:43.777 "adrfam": "IPv4", 00:20:43.777 "traddr": "10.0.0.2", 00:20:43.777 "trsvcid": "4420" 00:20:43.777 }, 00:20:43.777 "peer_address": { 00:20:43.777 "trtype": "TCP", 00:20:43.777 "adrfam": "IPv4", 00:20:43.777 "traddr": "10.0.0.1", 00:20:43.777 "trsvcid": "47842" 00:20:43.777 }, 00:20:43.777 "auth": { 00:20:43.777 "state": "completed", 00:20:43.777 "digest": "sha256", 00:20:43.777 "dhgroup": "ffdhe8192" 00:20:43.777 } 00:20:43.777 } 00:20:43.777 ]' 00:20:43.777 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.034 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:44.034 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.034 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.034 00:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.034 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.034 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.034 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.293 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:44.293 00:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.228 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.487 00:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.487 00:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.424 00:20:46.424 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.424 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.424 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.682 { 00:20:46.682 "cntlid": 45, 00:20:46.682 "qid": 0, 00:20:46.682 "state": "enabled", 00:20:46.682 "thread": "nvmf_tgt_poll_group_000", 00:20:46.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.682 "listen_address": { 00:20:46.682 "trtype": "TCP", 00:20:46.682 "adrfam": "IPv4", 00:20:46.682 "traddr": "10.0.0.2", 00:20:46.682 "trsvcid": "4420" 00:20:46.682 }, 00:20:46.682 "peer_address": { 00:20:46.682 "trtype": "TCP", 00:20:46.682 "adrfam": "IPv4", 00:20:46.682 "traddr": "10.0.0.1", 00:20:46.682 "trsvcid": "55550" 00:20:46.682 }, 00:20:46.682 "auth": { 00:20:46.682 "state": "completed", 00:20:46.682 "digest": "sha256", 00:20:46.682 "dhgroup": "ffdhe8192" 00:20:46.682 } 00:20:46.682 } 00:20:46.682 ]' 00:20:46.682 
00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.682 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.683 00:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.943 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:46.943 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.880 00:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.139 00:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.139 00:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.077 00:20:49.077 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.077 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.077 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.335 { 00:20:49.335 "cntlid": 47, 00:20:49.335 "qid": 0, 00:20:49.335 "state": "enabled", 00:20:49.335 "thread": "nvmf_tgt_poll_group_000", 00:20:49.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.335 "listen_address": { 00:20:49.335 "trtype": "TCP", 00:20:49.335 "adrfam": "IPv4", 00:20:49.335 "traddr": "10.0.0.2", 00:20:49.335 "trsvcid": "4420" 00:20:49.335 }, 00:20:49.335 "peer_address": { 00:20:49.335 "trtype": "TCP", 00:20:49.335 "adrfam": "IPv4", 00:20:49.335 "traddr": "10.0.0.1", 00:20:49.335 "trsvcid": "55588" 00:20:49.335 }, 00:20:49.335 "auth": { 00:20:49.335 "state": "completed", 00:20:49.335 
"digest": "sha256", 00:20:49.335 "dhgroup": "ffdhe8192" 00:20:49.335 } 00:20:49.335 } 00:20:49.335 ]' 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.335 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.593 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.593 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.593 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.593 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.593 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.852 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:49.852 00:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:50.788 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:50.789 00:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:51.048 00:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.048 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.308 00:20:51.308 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.308 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.308 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.879 { 00:20:51.879 "cntlid": 49, 00:20:51.879 "qid": 0, 00:20:51.879 "state": "enabled", 00:20:51.879 "thread": "nvmf_tgt_poll_group_000", 00:20:51.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.879 "listen_address": { 00:20:51.879 "trtype": "TCP", 00:20:51.879 "adrfam": "IPv4", 
00:20:51.879 "traddr": "10.0.0.2", 00:20:51.879 "trsvcid": "4420" 00:20:51.879 }, 00:20:51.879 "peer_address": { 00:20:51.879 "trtype": "TCP", 00:20:51.879 "adrfam": "IPv4", 00:20:51.879 "traddr": "10.0.0.1", 00:20:51.879 "trsvcid": "55614" 00:20:51.879 }, 00:20:51.879 "auth": { 00:20:51.879 "state": "completed", 00:20:51.879 "digest": "sha384", 00:20:51.879 "dhgroup": "null" 00:20:51.879 } 00:20:51.879 } 00:20:51.879 ]' 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.879 00:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.139 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:52.139 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:20:53.075 00:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.075 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.334 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.592 00:20:53.592 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.592 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.592 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.851 { 00:20:53.851 "cntlid": 51, 00:20:53.851 "qid": 0, 00:20:53.851 "state": "enabled", 
00:20:53.851 "thread": "nvmf_tgt_poll_group_000", 00:20:53.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.851 "listen_address": { 00:20:53.851 "trtype": "TCP", 00:20:53.851 "adrfam": "IPv4", 00:20:53.851 "traddr": "10.0.0.2", 00:20:53.851 "trsvcid": "4420" 00:20:53.851 }, 00:20:53.851 "peer_address": { 00:20:53.851 "trtype": "TCP", 00:20:53.851 "adrfam": "IPv4", 00:20:53.851 "traddr": "10.0.0.1", 00:20:53.851 "trsvcid": "56480" 00:20:53.851 }, 00:20:53.851 "auth": { 00:20:53.851 "state": "completed", 00:20:53.851 "digest": "sha384", 00:20:53.851 "dhgroup": "null" 00:20:53.851 } 00:20:53.851 } 00:20:53.851 ]' 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.851 00:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.110 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.110 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.110 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.370 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:54.370 00:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:55.310 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.569 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.828 00:20:55.828 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.828 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.828 00:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.086 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.087 00:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.087 { 00:20:56.087 "cntlid": 53, 00:20:56.087 "qid": 0, 00:20:56.087 "state": "enabled", 00:20:56.087 "thread": "nvmf_tgt_poll_group_000", 00:20:56.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.087 "listen_address": { 00:20:56.087 "trtype": "TCP", 00:20:56.087 "adrfam": "IPv4", 00:20:56.087 "traddr": "10.0.0.2", 00:20:56.087 "trsvcid": "4420" 00:20:56.087 }, 00:20:56.087 "peer_address": { 00:20:56.087 "trtype": "TCP", 00:20:56.087 "adrfam": "IPv4", 00:20:56.087 "traddr": "10.0.0.1", 00:20:56.087 "trsvcid": "56508" 00:20:56.087 }, 00:20:56.087 "auth": { 00:20:56.087 "state": "completed", 00:20:56.087 "digest": "sha384", 00:20:56.087 "dhgroup": "null" 00:20:56.087 } 00:20:56.087 } 00:20:56.087 ]' 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.087 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.659 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:56.659 00:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.599 00:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.170 00:20:58.170 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.170 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.170 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.430 { 00:20:58.430 "cntlid": 55, 00:20:58.430 "qid": 0, 00:20:58.430 "state": "enabled", 00:20:58.430 "thread": "nvmf_tgt_poll_group_000", 00:20:58.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.430 "listen_address": { 00:20:58.430 "trtype": "TCP", 00:20:58.430 "adrfam": "IPv4", 00:20:58.430 "traddr": "10.0.0.2", 00:20:58.430 "trsvcid": "4420" 00:20:58.430 }, 00:20:58.430 "peer_address": { 00:20:58.430 "trtype": "TCP", 00:20:58.430 "adrfam": "IPv4", 00:20:58.430 "traddr": "10.0.0.1", 00:20:58.430 "trsvcid": "56526" 00:20:58.430 }, 00:20:58.430 "auth": { 00:20:58.430 "state": "completed", 00:20:58.430 "digest": "sha384", 00:20:58.430 "dhgroup": "null" 00:20:58.430 } 00:20:58.430 } 00:20:58.430 ]' 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.430 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.689 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:58.689 00:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.624 00:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.624 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.881 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.882 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.882 00:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.141 00:21:00.141 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.141 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.141 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.400 { 00:21:00.400 "cntlid": 57, 00:21:00.400 "qid": 0, 00:21:00.400 "state": "enabled", 00:21:00.400 "thread": "nvmf_tgt_poll_group_000", 00:21:00.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.400 "listen_address": { 00:21:00.400 "trtype": "TCP", 00:21:00.400 "adrfam": "IPv4", 00:21:00.400 "traddr": "10.0.0.2", 00:21:00.400 "trsvcid": "4420" 00:21:00.400 }, 00:21:00.400 "peer_address": { 00:21:00.400 "trtype": "TCP", 00:21:00.400 "adrfam": "IPv4", 00:21:00.400 "traddr": "10.0.0.1", 00:21:00.400 "trsvcid": "56536" 00:21:00.400 }, 00:21:00.400 "auth": { 00:21:00.400 "state": "completed", 00:21:00.400 "digest": "sha384", 00:21:00.400 "dhgroup": "ffdhe2048" 00:21:00.400 } 00:21:00.400 } 00:21:00.400 ]' 00:21:00.400 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.658 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.916 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:00.916 00:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.851 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.852 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.852 00:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.108 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.109 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.366 00:21:02.366 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.366 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.366 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.624 { 00:21:02.624 "cntlid": 59, 00:21:02.624 "qid": 0, 00:21:02.624 "state": "enabled", 00:21:02.624 "thread": "nvmf_tgt_poll_group_000", 00:21:02.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.624 "listen_address": { 00:21:02.624 "trtype": "TCP", 00:21:02.624 "adrfam": "IPv4", 00:21:02.624 "traddr": "10.0.0.2", 00:21:02.624 "trsvcid": "4420" 00:21:02.624 }, 00:21:02.624 "peer_address": { 00:21:02.624 "trtype": "TCP", 00:21:02.624 "adrfam": "IPv4", 00:21:02.624 "traddr": "10.0.0.1", 00:21:02.624 "trsvcid": "56568" 00:21:02.624 }, 00:21:02.624 "auth": { 00:21:02.624 "state": "completed", 00:21:02.624 "digest": "sha384", 00:21:02.624 "dhgroup": "ffdhe2048" 00:21:02.624 } 00:21:02.624 } 00:21:02.624 ]' 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.624 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.881 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.881 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.881 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.881 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.881 00:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.139 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:03.139 00:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.074 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.332 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.590 00:21:04.590 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.590 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.590 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.848 { 00:21:04.848 "cntlid": 61, 00:21:04.848 "qid": 0, 00:21:04.848 "state": "enabled", 00:21:04.848 "thread": "nvmf_tgt_poll_group_000", 00:21:04.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.848 "listen_address": { 00:21:04.848 "trtype": "TCP", 00:21:04.848 "adrfam": "IPv4", 00:21:04.848 "traddr": "10.0.0.2", 00:21:04.848 "trsvcid": "4420" 00:21:04.848 }, 00:21:04.848 "peer_address": { 00:21:04.848 "trtype": "TCP", 00:21:04.848 "adrfam": "IPv4", 00:21:04.848 "traddr": "10.0.0.1", 00:21:04.848 "trsvcid": "35604" 00:21:04.848 }, 00:21:04.848 "auth": { 00:21:04.848 "state": "completed", 00:21:04.848 "digest": "sha384", 00:21:04.848 "dhgroup": "ffdhe2048" 00:21:04.848 } 00:21:04.848 } 00:21:04.848 ]' 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.848 00:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.106 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.106 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.106 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.106 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.106 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.364 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:05.364 00:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:06.304 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.564 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.823 00:21:06.823 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.823 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.823 00:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.082 { 00:21:07.082 "cntlid": 63, 00:21:07.082 "qid": 0, 00:21:07.082 "state": "enabled", 00:21:07.082 "thread": "nvmf_tgt_poll_group_000", 00:21:07.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.082 "listen_address": { 00:21:07.082 "trtype": "TCP", 00:21:07.082 "adrfam": "IPv4", 00:21:07.082 "traddr": "10.0.0.2", 00:21:07.082 "trsvcid": "4420" 00:21:07.082 }, 00:21:07.082 "peer_address": { 00:21:07.082 "trtype": "TCP", 00:21:07.082 "adrfam": "IPv4", 00:21:07.082 "traddr": "10.0.0.1", 00:21:07.082 "trsvcid": "35626" 00:21:07.082 }, 00:21:07.082 "auth": { 00:21:07.082 "state": "completed", 00:21:07.082 "digest": "sha384", 00:21:07.082 "dhgroup": "ffdhe2048" 00:21:07.082 } 00:21:07.082 } 00:21:07.082 ]' 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.082 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.343 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:07.343 00:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:08.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.281 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.541 00:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.209 
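For readability, the per-key sequence that the harness keeps repeating above (and continues to repeat below for each digest/dhgroup pair) can be condensed into the following shell sketch. It is reconstructed only from commands already visible in this log (the host-side RPC socket /var/tmp/host.sock, subsystem nqn.2024-03.io.spdk:cnode0, host NQN nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55); it is not a verbatim excerpt of target/auth.sh, and the rpc_host helper, the explicit keyid bounds, and the ckeys reference are placeholders for illustration.

# Sketch of one dhgroup pass (here ffdhe2048; the log repeats it for ffdhe3072, ffdhe4096, ...).
# rpc_cmd is the target-side RPC helper used throughout this log; rpc_host is a stand-in
# for the host-side rpc.py invocation against /var/tmp/host.sock.
rpc_host() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

for keyid in 0 1 2 3; do
    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_host bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # Register the host on the target with this key pair; ckeys is the controller-key
    # array from target/auth.sh, empty for key3 in this run, so the argument drops out.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Authenticate a host-side bdev connection, then verify and tear it down.
    rpc_host bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"   # expect auth.state == "completed"
    rpc_host bdev_nvme_detach_controller nvme0
done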
00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.209 { 00:21:09.209 "cntlid": 65, 00:21:09.209 "qid": 0, 00:21:09.209 "state": "enabled", 00:21:09.209 "thread": "nvmf_tgt_poll_group_000", 00:21:09.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.209 "listen_address": { 00:21:09.209 "trtype": "TCP", 00:21:09.209 "adrfam": "IPv4", 00:21:09.209 "traddr": "10.0.0.2", 00:21:09.209 "trsvcid": "4420" 00:21:09.209 }, 00:21:09.209 "peer_address": { 00:21:09.209 "trtype": "TCP", 00:21:09.209 "adrfam": "IPv4", 00:21:09.209 "traddr": "10.0.0.1", 00:21:09.209 "trsvcid": "35652" 00:21:09.209 }, 00:21:09.209 "auth": { 00:21:09.209 "state": "completed", 00:21:09.209 "digest": "sha384", 00:21:09.209 "dhgroup": "ffdhe3072" 00:21:09.209 } 00:21:09.209 } 00:21:09.209 ]' 00:21:09.209 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.489 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.780 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:09.780 00:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:10.871 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.872 00:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.210 00:21:11.210 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.210 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.210 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.484 { 00:21:11.484 "cntlid": 67, 00:21:11.484 "qid": 0, 00:21:11.484 "state": "enabled", 00:21:11.484 "thread": "nvmf_tgt_poll_group_000", 00:21:11.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.484 "listen_address": { 00:21:11.484 "trtype": "TCP", 00:21:11.484 "adrfam": "IPv4", 00:21:11.484 "traddr": "10.0.0.2", 00:21:11.484 "trsvcid": "4420" 00:21:11.484 }, 00:21:11.484 "peer_address": { 00:21:11.484 "trtype": "TCP", 00:21:11.484 "adrfam": "IPv4", 00:21:11.484 "traddr": "10.0.0.1", 00:21:11.484 "trsvcid": "35670" 00:21:11.484 }, 00:21:11.484 "auth": { 00:21:11.484 "state": "completed", 00:21:11.484 "digest": "sha384", 00:21:11.484 "dhgroup": "ffdhe3072" 00:21:11.484 } 00:21:11.484 } 00:21:11.484 ]' 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.484 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.743 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.743 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.743 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.743 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.743 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.003 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret 
DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:12.003 00:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:12.940 00:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.199 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.457 00:21:13.457 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.457 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.457 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.716 { 00:21:13.716 "cntlid": 69, 00:21:13.716 "qid": 0, 00:21:13.716 "state": "enabled", 00:21:13.716 "thread": "nvmf_tgt_poll_group_000", 00:21:13.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.716 "listen_address": { 00:21:13.716 "trtype": "TCP", 00:21:13.716 "adrfam": "IPv4", 00:21:13.716 "traddr": "10.0.0.2", 00:21:13.716 "trsvcid": "4420" 00:21:13.716 }, 00:21:13.716 "peer_address": { 00:21:13.716 "trtype": "TCP", 00:21:13.716 "adrfam": "IPv4", 00:21:13.716 "traddr": "10.0.0.1", 00:21:13.716 "trsvcid": "45702" 00:21:13.716 }, 00:21:13.716 "auth": { 00:21:13.716 "state": "completed", 00:21:13.716 "digest": "sha384", 00:21:13.716 "dhgroup": "ffdhe3072" 00:21:13.716 } 00:21:13.716 } 00:21:13.716 ]' 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.716 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.976 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.976 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.976 00:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:14.234 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:14.234 00:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.172 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.430 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.688 00:21:15.688 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.688 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.688 00:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.947 { 00:21:15.947 "cntlid": 71, 00:21:15.947 "qid": 0, 00:21:15.947 "state": "enabled", 00:21:15.947 "thread": "nvmf_tgt_poll_group_000", 00:21:15.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.947 "listen_address": { 00:21:15.947 "trtype": "TCP", 00:21:15.947 "adrfam": "IPv4", 00:21:15.947 "traddr": "10.0.0.2", 00:21:15.947 "trsvcid": "4420" 00:21:15.947 }, 00:21:15.947 "peer_address": { 00:21:15.947 "trtype": "TCP", 00:21:15.947 "adrfam": "IPv4", 00:21:15.947 "traddr": "10.0.0.1", 00:21:15.947 "trsvcid": "45732" 00:21:15.947 }, 00:21:15.947 "auth": { 00:21:15.947 "state": "completed", 00:21:15.947 "digest": "sha384", 00:21:15.947 "dhgroup": "ffdhe3072" 00:21:15.947 } 00:21:15.947 } 00:21:15.947 ]' 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.947 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.205 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.205 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.205 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.205 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.205 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.463 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:16.463 00:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.401 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
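Each pass above also validates the same key material through the Linux kernel initiator before removing the host again, as the nvme connect/disconnect lines in this log show. A hedged sketch of that step follows; the $dhchap_key and $dhchap_ctrl_key variables are placeholders for the DHHC-1 secrets printed verbatim elsewhere in this log, and $subnqn/$hostnqn are the same values as in the previous sketch.

# Sketch (placeholders, not verbatim from target/auth.sh): kernel-initiator check per key.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"
nvme disconnect -n "$subnqn"          # log reports "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"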
00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.661 00:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.921 00:21:18.180 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.180 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.180 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.439 { 00:21:18.439 "cntlid": 73, 00:21:18.439 "qid": 0, 00:21:18.439 "state": "enabled", 00:21:18.439 "thread": "nvmf_tgt_poll_group_000", 00:21:18.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.439 "listen_address": { 00:21:18.439 "trtype": "TCP", 00:21:18.439 "adrfam": "IPv4", 00:21:18.439 "traddr": "10.0.0.2", 00:21:18.439 "trsvcid": "4420" 00:21:18.439 }, 00:21:18.439 "peer_address": { 00:21:18.439 "trtype": "TCP", 00:21:18.439 "adrfam": "IPv4", 00:21:18.439 "traddr": "10.0.0.1", 00:21:18.439 "trsvcid": "45756" 00:21:18.439 }, 00:21:18.439 "auth": { 00:21:18.439 "state": "completed", 00:21:18.439 "digest": "sha384", 00:21:18.439 "dhgroup": "ffdhe4096" 00:21:18.439 } 00:21:18.439 } 00:21:18.439 ]' 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.439 
00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.439 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.698 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:18.698 00:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.635 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.892 00:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.459 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.459 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.717 { 00:21:20.717 "cntlid": 75, 00:21:20.717 "qid": 0, 00:21:20.717 "state": "enabled", 00:21:20.717 "thread": "nvmf_tgt_poll_group_000", 00:21:20.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.717 "listen_address": { 00:21:20.717 "trtype": "TCP", 00:21:20.717 "adrfam": "IPv4", 00:21:20.717 "traddr": "10.0.0.2", 00:21:20.717 "trsvcid": "4420" 00:21:20.717 }, 00:21:20.717 "peer_address": { 00:21:20.717 "trtype": "TCP", 00:21:20.717 "adrfam": "IPv4", 00:21:20.717 "traddr": "10.0.0.1", 00:21:20.717 "trsvcid": "45780" 00:21:20.717 }, 00:21:20.717 "auth": { 00:21:20.717 "state": "completed", 00:21:20.717 "digest": "sha384", 00:21:20.717 "dhgroup": "ffdhe4096" 00:21:20.717 } 00:21:20.717 } 00:21:20.717 ]' 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.717 00:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.975 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:20.975 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:21.910 00:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.167 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.426 00:21:22.686 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.686 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.686 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.944 { 00:21:22.944 "cntlid": 77, 00:21:22.944 "qid": 0, 00:21:22.944 "state": "enabled", 00:21:22.944 "thread": "nvmf_tgt_poll_group_000", 00:21:22.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.944 "listen_address": { 00:21:22.944 "trtype": "TCP", 00:21:22.944 "adrfam": "IPv4", 00:21:22.944 "traddr": "10.0.0.2", 00:21:22.944 "trsvcid": "4420" 00:21:22.944 }, 00:21:22.944 "peer_address": { 00:21:22.944 "trtype": "TCP", 00:21:22.944 "adrfam": "IPv4", 00:21:22.944 "traddr": "10.0.0.1", 00:21:22.944 "trsvcid": "45804" 00:21:22.944 }, 00:21:22.944 "auth": { 00:21:22.944 "state": "completed", 00:21:22.944 "digest": "sha384", 00:21:22.944 "dhgroup": "ffdhe4096" 00:21:22.944 } 00:21:22.944 } 00:21:22.944 ]' 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.944 00:48:38 
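Each pass also repeats the handshake from the kernel initiator: after detaching the SPDK host controller, nvme-cli connects in-band with the DHHC-1 secrets themselves rather than key names, then disconnects so the host entry can be removed before the next key is configured. A hedged sketch assembled from the nvme invocation in the trace; the DHHC-1 strings below are placeholders for the generated secrets, and hostnqn is the host NQN used throughout this log:

  hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
  # Kernel initiator: pass the host and controller secrets directly on the command line.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret "DHHC-1:00:<host secret>:" --dhchap-ctrl-secret "DHHC-1:03:<controller secret>:"
  # Tear the kernel connection back down and drop the host entry from the subsystem.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"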
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.944 00:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.202 00:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:23.202 00:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.136 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.393 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.959 00:21:24.959 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.959 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.959 00:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.959 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.959 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.959 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.959 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.217 { 00:21:25.217 "cntlid": 79, 00:21:25.217 "qid": 0, 00:21:25.217 "state": "enabled", 00:21:25.217 "thread": "nvmf_tgt_poll_group_000", 00:21:25.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.217 "listen_address": { 00:21:25.217 "trtype": "TCP", 00:21:25.217 "adrfam": "IPv4", 00:21:25.217 "traddr": "10.0.0.2", 00:21:25.217 "trsvcid": "4420" 00:21:25.217 }, 00:21:25.217 "peer_address": { 00:21:25.217 "trtype": "TCP", 00:21:25.217 "adrfam": "IPv4", 00:21:25.217 "traddr": "10.0.0.1", 00:21:25.217 "trsvcid": "42092" 00:21:25.217 }, 00:21:25.217 "auth": { 00:21:25.217 "state": "completed", 00:21:25.217 "digest": "sha384", 00:21:25.217 "dhgroup": "ffdhe4096" 00:21:25.217 } 00:21:25.217 } 00:21:25.217 ]' 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.217 00:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.217 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.476 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:25.476 00:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.412 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.670 00:48:42 
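One detail visible in the trace: the controller key is optional per key index. target/auth.sh builds the --dhchap-ctrlr-key argument with a :+ parameter expansion, so for key3 (which has no matching ckey3) the flag simply disappears and only the host authenticates itself; that is why the nvme connect for key3 above carries a --dhchap-secret but no --dhchap-ctrl-secret. Roughly, with the script's positional $3 written as $keyid for readability:

  # Only pass --dhchap-ctrlr-key when a controller key exists for this index.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # ckeys[$keyid] non-empty -> ckey=(--dhchap-ctrlr-key ckeyN)   (bidirectional authentication)
  # ckeys[$keyid] empty     -> ckey=()                           (host-only authentication, as for key3)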
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.670 00:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.237 00:21:27.237 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.237 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.237 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.495 { 00:21:27.495 "cntlid": 81, 00:21:27.495 "qid": 0, 00:21:27.495 "state": "enabled", 00:21:27.495 "thread": "nvmf_tgt_poll_group_000", 00:21:27.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.495 "listen_address": { 00:21:27.495 "trtype": "TCP", 00:21:27.495 "adrfam": "IPv4", 00:21:27.495 "traddr": "10.0.0.2", 00:21:27.495 "trsvcid": "4420" 00:21:27.495 }, 00:21:27.495 "peer_address": { 00:21:27.495 "trtype": "TCP", 00:21:27.495 "adrfam": "IPv4", 00:21:27.495 "traddr": "10.0.0.1", 00:21:27.495 "trsvcid": "42102" 00:21:27.495 }, 00:21:27.495 "auth": { 00:21:27.495 "state": "completed", 00:21:27.495 "digest": 
"sha384", 00:21:27.495 "dhgroup": "ffdhe6144" 00:21:27.495 } 00:21:27.495 } 00:21:27.495 ]' 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.495 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.753 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:27.753 00:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.690 00:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.949 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.883 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.883 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.883 { 00:21:29.883 "cntlid": 83, 00:21:29.883 "qid": 0, 00:21:29.883 "state": "enabled", 00:21:29.883 "thread": "nvmf_tgt_poll_group_000", 00:21:29.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.883 "listen_address": { 00:21:29.883 "trtype": "TCP", 00:21:29.883 "adrfam": "IPv4", 00:21:29.883 "traddr": "10.0.0.2", 00:21:29.883 
"trsvcid": "4420" 00:21:29.883 }, 00:21:29.883 "peer_address": { 00:21:29.884 "trtype": "TCP", 00:21:29.884 "adrfam": "IPv4", 00:21:29.884 "traddr": "10.0.0.1", 00:21:29.884 "trsvcid": "42138" 00:21:29.884 }, 00:21:29.884 "auth": { 00:21:29.884 "state": "completed", 00:21:29.884 "digest": "sha384", 00:21:29.884 "dhgroup": "ffdhe6144" 00:21:29.884 } 00:21:29.884 } 00:21:29.884 ]' 00:21:29.884 00:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.884 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.884 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.142 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.142 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.142 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.142 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.142 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.400 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:30.400 00:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.337 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.596 
00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.596 00:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.166 00:21:32.166 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.166 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.166 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.425 { 00:21:32.425 "cntlid": 85, 00:21:32.425 "qid": 0, 00:21:32.425 "state": "enabled", 00:21:32.425 "thread": "nvmf_tgt_poll_group_000", 00:21:32.425 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.425 "listen_address": { 00:21:32.425 "trtype": "TCP", 00:21:32.425 "adrfam": "IPv4", 00:21:32.425 "traddr": "10.0.0.2", 00:21:32.425 "trsvcid": "4420" 00:21:32.425 }, 00:21:32.425 "peer_address": { 00:21:32.425 "trtype": "TCP", 00:21:32.425 "adrfam": "IPv4", 00:21:32.425 "traddr": "10.0.0.1", 00:21:32.425 "trsvcid": "42168" 00:21:32.425 }, 00:21:32.425 "auth": { 00:21:32.425 "state": "completed", 00:21:32.425 "digest": "sha384", 00:21:32.425 "dhgroup": "ffdhe6144" 00:21:32.425 } 00:21:32.425 } 00:21:32.425 ]' 00:21:32.425 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.683 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.683 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.684 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.684 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.684 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.684 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.684 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.942 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:32.942 00:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.881 00:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:33.881 00:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:34.139 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.140 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.708 00:21:34.708 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.708 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.708 00:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.967 { 00:21:34.967 "cntlid": 87, 
00:21:34.967 "qid": 0, 00:21:34.967 "state": "enabled", 00:21:34.967 "thread": "nvmf_tgt_poll_group_000", 00:21:34.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.967 "listen_address": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "adrfam": "IPv4", 00:21:34.967 "traddr": "10.0.0.2", 00:21:34.967 "trsvcid": "4420" 00:21:34.967 }, 00:21:34.967 "peer_address": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "adrfam": "IPv4", 00:21:34.967 "traddr": "10.0.0.1", 00:21:34.967 "trsvcid": "46230" 00:21:34.967 }, 00:21:34.967 "auth": { 00:21:34.967 "state": "completed", 00:21:34.967 "digest": "sha384", 00:21:34.967 "dhgroup": "ffdhe6144" 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ]' 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.967 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.225 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.225 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.225 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.225 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.226 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.484 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:35.484 00:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.420 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.679 00:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.619 00:21:37.619 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.619 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.619 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.877 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.877 { 00:21:37.877 "cntlid": 89, 00:21:37.877 "qid": 0, 00:21:37.877 "state": "enabled", 00:21:37.877 "thread": "nvmf_tgt_poll_group_000", 00:21:37.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.877 "listen_address": { 00:21:37.877 "trtype": "TCP", 00:21:37.877 "adrfam": "IPv4", 00:21:37.877 "traddr": "10.0.0.2", 00:21:37.877 "trsvcid": "4420" 00:21:37.877 }, 00:21:37.877 "peer_address": { 00:21:37.877 "trtype": "TCP", 00:21:37.877 "adrfam": "IPv4", 00:21:37.877 "traddr": "10.0.0.1", 00:21:37.877 "trsvcid": "46268" 00:21:37.877 }, 00:21:37.877 "auth": { 00:21:37.877 "state": "completed", 00:21:37.877 "digest": "sha384", 00:21:37.877 "dhgroup": "ffdhe8192" 00:21:37.877 } 00:21:37.878 } 00:21:37.878 ]' 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.878 00:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.136 00:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:38.136 00:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.075 00:48:55 
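Across the passes in this section the target hands out a fresh controller ID for every successful handshake, which is presumably why the cntlid in the qpair dumps climbs in steps of two (73, 75, ..., 89 at this point) while digest, dhgroup, and auth state track the combination under test. A one-liner that condenses the same qpair JSON the test inspects (default target RPC socket assumed):

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[] | "\(.cntlid) \(.auth.digest) \(.auth.dhgroup) \(.auth.state)"'
  # e.g. "89 sha384 ffdhe8192 completed" for the pass just above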
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.075 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.332 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:39.332 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.333 00:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.264 00:21:40.264 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.265 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.265 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.522 { 00:21:40.522 "cntlid": 91, 00:21:40.522 "qid": 0, 00:21:40.522 "state": "enabled", 00:21:40.522 "thread": "nvmf_tgt_poll_group_000", 00:21:40.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.522 "listen_address": { 00:21:40.522 "trtype": "TCP", 00:21:40.522 "adrfam": "IPv4", 00:21:40.522 "traddr": "10.0.0.2", 00:21:40.522 "trsvcid": "4420" 00:21:40.522 }, 00:21:40.522 "peer_address": { 00:21:40.522 "trtype": "TCP", 00:21:40.522 "adrfam": "IPv4", 00:21:40.522 "traddr": "10.0.0.1", 00:21:40.522 "trsvcid": "46282" 00:21:40.522 }, 00:21:40.522 "auth": { 00:21:40.522 "state": "completed", 00:21:40.522 "digest": "sha384", 00:21:40.522 "dhgroup": "ffdhe8192" 00:21:40.522 } 00:21:40.522 } 00:21:40.522 ]' 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.522 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.781 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:40.781 00:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:41.715 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.716 00:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.716 00:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.973 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.908 00:21:42.908 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.908 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.908 00:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.166 00:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.166 { 00:21:43.166 "cntlid": 93, 00:21:43.166 "qid": 0, 00:21:43.166 "state": "enabled", 00:21:43.166 "thread": "nvmf_tgt_poll_group_000", 00:21:43.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.166 "listen_address": { 00:21:43.166 "trtype": "TCP", 00:21:43.166 "adrfam": "IPv4", 00:21:43.166 "traddr": "10.0.0.2", 00:21:43.166 "trsvcid": "4420" 00:21:43.166 }, 00:21:43.166 "peer_address": { 00:21:43.166 "trtype": "TCP", 00:21:43.166 "adrfam": "IPv4", 00:21:43.166 "traddr": "10.0.0.1", 00:21:43.166 "trsvcid": "46298" 00:21:43.166 }, 00:21:43.166 "auth": { 00:21:43.166 "state": "completed", 00:21:43.166 "digest": "sha384", 00:21:43.166 "dhgroup": "ffdhe8192" 00:21:43.166 } 00:21:43.166 } 00:21:43.166 ]' 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.166 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.732 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:43.732 00:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.667 00:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.667 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.926 00:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.861 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.861 00:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.861 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.861 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.861 { 00:21:45.861 "cntlid": 95, 00:21:45.861 "qid": 0, 00:21:45.861 "state": "enabled", 00:21:45.861 "thread": "nvmf_tgt_poll_group_000", 00:21:45.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.861 "listen_address": { 00:21:45.861 "trtype": "TCP", 00:21:45.861 "adrfam": "IPv4", 00:21:45.861 "traddr": "10.0.0.2", 00:21:45.861 "trsvcid": "4420" 00:21:45.861 }, 00:21:45.861 "peer_address": { 00:21:45.861 "trtype": "TCP", 00:21:45.861 "adrfam": "IPv4", 00:21:45.861 "traddr": "10.0.0.1", 00:21:45.861 "trsvcid": "52410" 00:21:45.861 }, 00:21:45.861 "auth": { 00:21:45.861 "state": "completed", 00:21:45.861 "digest": "sha384", 00:21:45.861 "dhgroup": "ffdhe8192" 00:21:45.861 } 00:21:45.861 } 00:21:45.861 ]' 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.119 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.378 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:46.378 00:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.313 00:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.313 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.571 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.829 00:21:47.829 
00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.829 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.829 00:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.087 { 00:21:48.087 "cntlid": 97, 00:21:48.087 "qid": 0, 00:21:48.087 "state": "enabled", 00:21:48.087 "thread": "nvmf_tgt_poll_group_000", 00:21:48.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.087 "listen_address": { 00:21:48.087 "trtype": "TCP", 00:21:48.087 "adrfam": "IPv4", 00:21:48.087 "traddr": "10.0.0.2", 00:21:48.087 "trsvcid": "4420" 00:21:48.087 }, 00:21:48.087 "peer_address": { 00:21:48.087 "trtype": "TCP", 00:21:48.087 "adrfam": "IPv4", 00:21:48.087 "traddr": "10.0.0.1", 00:21:48.087 "trsvcid": "52446" 00:21:48.087 }, 00:21:48.087 "auth": { 00:21:48.087 "state": "completed", 00:21:48.087 "digest": "sha512", 00:21:48.087 "dhgroup": "null" 00:21:48.087 } 00:21:48.087 } 00:21:48.087 ]' 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.087 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.345 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:48.345 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.345 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.345 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.346 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.603 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:48.604 00:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:49.535 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.535 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.535 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.535 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.535 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.536 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.536 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:49.536 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.792 00:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.049 00:21:50.049 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.049 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.049 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.306 { 00:21:50.306 "cntlid": 99, 00:21:50.306 "qid": 0, 00:21:50.306 "state": "enabled", 00:21:50.306 "thread": "nvmf_tgt_poll_group_000", 00:21:50.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.306 "listen_address": { 00:21:50.306 "trtype": "TCP", 00:21:50.306 "adrfam": "IPv4", 00:21:50.306 "traddr": "10.0.0.2", 00:21:50.306 "trsvcid": "4420" 00:21:50.306 }, 00:21:50.306 "peer_address": { 00:21:50.306 "trtype": "TCP", 00:21:50.306 "adrfam": "IPv4", 00:21:50.306 "traddr": "10.0.0.1", 00:21:50.306 "trsvcid": "52456" 00:21:50.306 }, 00:21:50.306 "auth": { 00:21:50.306 "state": "completed", 00:21:50.306 "digest": "sha512", 00:21:50.306 "dhgroup": "null" 00:21:50.306 } 00:21:50.306 } 00:21:50.306 ]' 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.306 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.870 00:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:50.870 00:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:51.803 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
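The block above is one pass of the connect_authenticate loop in target/auth.sh: the host-side bdev layer is restricted to a single digest/dhgroup pair, the target registers the host NQN with the matching key, and the controller attach is what actually performs DH-HMAC-CHAP over TCP. A minimal sketch of that pass, reconstructed only from the commands visible in the log -- the target RPC socket (SPDK default) and the earlier registration of the key names key2/ckey2 are assumptions:

    # One connect_authenticate pass (sha512 digest, null dhgroup, key index 2) -- sketch only.
    # /var/tmp/host.sock is the host-side RPC socket shown in the log; the target side is
    # assumed to use SPDK's default socket, and key2/ckey2 are key names set up earlier in the run.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host: only offer sha512 / null during DH-HMAC-CHAP negotiation.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # Target: allow the host NQN to authenticate with key2 (ckey2 enables bidirectional auth).
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host: attach the controller; authentication runs as part of this attach.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2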
00:21:52.061 00:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.319 00:21:52.319 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.319 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.319 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.578 { 00:21:52.578 "cntlid": 101, 00:21:52.578 "qid": 0, 00:21:52.578 "state": "enabled", 00:21:52.578 "thread": "nvmf_tgt_poll_group_000", 00:21:52.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.578 "listen_address": { 00:21:52.578 "trtype": "TCP", 00:21:52.578 "adrfam": "IPv4", 00:21:52.578 "traddr": "10.0.0.2", 00:21:52.578 "trsvcid": "4420" 00:21:52.578 }, 00:21:52.578 "peer_address": { 00:21:52.578 "trtype": "TCP", 00:21:52.578 "adrfam": "IPv4", 00:21:52.578 "traddr": "10.0.0.1", 00:21:52.578 "trsvcid": "52496" 00:21:52.578 }, 00:21:52.578 "auth": { 00:21:52.578 "state": "completed", 00:21:52.578 "digest": "sha512", 00:21:52.578 "dhgroup": "null" 00:21:52.578 } 00:21:52.578 } 00:21:52.578 ]' 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:52.578 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.836 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.836 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.836 00:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.094 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:53.094 00:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.028 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.286 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.544 00:21:54.802 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.802 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.802 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.060 { 00:21:55.060 "cntlid": 103, 00:21:55.060 "qid": 0, 00:21:55.060 "state": "enabled", 00:21:55.060 "thread": "nvmf_tgt_poll_group_000", 00:21:55.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.060 "listen_address": { 00:21:55.060 "trtype": "TCP", 00:21:55.060 "adrfam": "IPv4", 00:21:55.060 "traddr": "10.0.0.2", 00:21:55.060 "trsvcid": "4420" 00:21:55.060 }, 00:21:55.060 "peer_address": { 00:21:55.060 "trtype": "TCP", 00:21:55.060 "adrfam": "IPv4", 00:21:55.060 "traddr": "10.0.0.1", 00:21:55.060 "trsvcid": "46106" 00:21:55.060 }, 00:21:55.060 "auth": { 00:21:55.060 "state": "completed", 00:21:55.060 "digest": "sha512", 00:21:55.060 "dhgroup": "null" 00:21:55.060 } 00:21:55.060 } 00:21:55.060 ]' 00:21:55.060 00:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.060 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.318 00:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:55.318 00:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.253 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
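Every pass is then verified the same way: the target's nvmf_subsystem_get_qpairs output is filtered with jq and the negotiated auth parameters are compared against what was configured. A sketch of that check for the sha512/ffdhe2048 iteration starting above; the qpairs JSON is written to a file here purely for readability, whereas the script keeps it in a shell variable:

    # Confirm the qpair authenticated with the expected parameters (sketch).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    [[ $(jq -r '.[0].auth.digest'  qpairs.json) == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' qpairs.json) == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   qpairs.json) == completed ]]
    # Tear the bdev controller down again before the kernel-initiator leg of the pass.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0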
00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.511 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.770 00:21:56.770 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.770 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.770 00:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.028 { 00:21:57.028 "cntlid": 105, 00:21:57.028 "qid": 0, 00:21:57.028 "state": "enabled", 00:21:57.028 "thread": "nvmf_tgt_poll_group_000", 00:21:57.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.028 "listen_address": { 00:21:57.028 "trtype": "TCP", 00:21:57.028 "adrfam": "IPv4", 00:21:57.028 "traddr": "10.0.0.2", 00:21:57.028 "trsvcid": "4420" 00:21:57.028 }, 00:21:57.028 "peer_address": { 00:21:57.028 "trtype": "TCP", 00:21:57.028 "adrfam": "IPv4", 00:21:57.028 "traddr": "10.0.0.1", 00:21:57.028 "trsvcid": "46128" 00:21:57.028 }, 00:21:57.028 "auth": { 00:21:57.028 "state": "completed", 00:21:57.028 "digest": "sha512", 00:21:57.028 "dhgroup": "ffdhe2048" 00:21:57.028 } 00:21:57.028 } 00:21:57.028 ]' 00:21:57.028 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.285 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.285 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.286 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.286 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.286 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.286 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.286 00:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.543 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:57.543 00:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:58.476 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.734 00:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.298 00:21:59.298 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.298 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.299 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.299 { 00:21:59.299 "cntlid": 107, 00:21:59.299 "qid": 0, 00:21:59.299 "state": "enabled", 00:21:59.299 "thread": "nvmf_tgt_poll_group_000", 00:21:59.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.299 "listen_address": { 00:21:59.299 "trtype": "TCP", 00:21:59.299 "adrfam": "IPv4", 00:21:59.299 "traddr": "10.0.0.2", 00:21:59.299 "trsvcid": "4420" 00:21:59.299 }, 00:21:59.299 "peer_address": { 00:21:59.299 "trtype": "TCP", 00:21:59.299 "adrfam": "IPv4", 00:21:59.299 "traddr": "10.0.0.1", 00:21:59.299 "trsvcid": "46154" 00:21:59.299 }, 00:21:59.299 "auth": { 00:21:59.299 "state": "completed", 00:21:59.299 "digest": "sha512", 00:21:59.299 "dhgroup": "ffdhe2048" 00:21:59.299 } 00:21:59.299 } 00:21:59.299 ]' 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.556 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.814 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:21:59.814 00:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.745 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.746 00:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
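For reference, each connect_authenticate pass traced here reduces to a short RPC sequence; the sketch below condenses it using this run's socket path, NQNs and the sha512/ffdhe2048 combination, with KEYID standing in for the key index the loop walks over (the keyN/ckeyN names refer to keys registered earlier in the test):

  # Condensed sketch of one connect_authenticate iteration (illustrative only).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  KEYID=2   # the test loops over every configured key index

  # Host side (hostrpc): restrict the initiator to a single digest/dhgroup combination.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side (rpc_cmd, default RPC socket): allow the host with matching DH-HMAC-CHAP keys.
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

  # Host side: attaching a controller triggers the authentication handshake.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID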
00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.002 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.003 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.003 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.566 00:22:01.566 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.566 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.566 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.822 { 00:22:01.822 "cntlid": 109, 00:22:01.822 "qid": 0, 00:22:01.822 "state": "enabled", 00:22:01.822 "thread": "nvmf_tgt_poll_group_000", 00:22:01.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.822 "listen_address": { 00:22:01.822 "trtype": "TCP", 00:22:01.822 "adrfam": "IPv4", 00:22:01.822 "traddr": "10.0.0.2", 00:22:01.822 "trsvcid": "4420" 00:22:01.822 }, 00:22:01.822 "peer_address": { 00:22:01.822 "trtype": "TCP", 00:22:01.822 "adrfam": "IPv4", 00:22:01.822 "traddr": "10.0.0.1", 00:22:01.822 "trsvcid": "46188" 00:22:01.822 }, 00:22:01.822 "auth": { 00:22:01.822 "state": "completed", 00:22:01.822 "digest": "sha512", 00:22:01.822 "dhgroup": "ffdhe2048" 00:22:01.822 } 00:22:01.822 } 00:22:01.822 ]' 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.822 00:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.822 00:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.079 00:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:02.079 00:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:03.009 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.266 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.266 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.266 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.266 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.266 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.267 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.267 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.524 00:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.524 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.781 00:22:03.781 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.781 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.781 00:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.038 { 00:22:04.038 "cntlid": 111, 00:22:04.038 "qid": 0, 00:22:04.038 "state": "enabled", 00:22:04.038 "thread": "nvmf_tgt_poll_group_000", 00:22:04.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.038 "listen_address": { 00:22:04.038 "trtype": "TCP", 00:22:04.038 "adrfam": "IPv4", 00:22:04.038 "traddr": "10.0.0.2", 00:22:04.038 "trsvcid": "4420" 00:22:04.038 }, 00:22:04.038 "peer_address": { 00:22:04.038 "trtype": "TCP", 00:22:04.038 "adrfam": "IPv4", 00:22:04.038 "traddr": "10.0.0.1", 00:22:04.038 "trsvcid": "48348" 00:22:04.038 }, 00:22:04.038 "auth": { 00:22:04.038 "state": "completed", 00:22:04.038 "digest": "sha512", 00:22:04.038 "dhgroup": "ffdhe2048" 00:22:04.038 } 00:22:04.038 } 00:22:04.038 ]' 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.038 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.038 
00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.295 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.295 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.295 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.295 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.295 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.553 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:04.553 00:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:05.487 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.746 00:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.312 00:22:06.312 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.312 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.312 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.570 { 00:22:06.570 "cntlid": 113, 00:22:06.570 "qid": 0, 00:22:06.570 "state": "enabled", 00:22:06.570 "thread": "nvmf_tgt_poll_group_000", 00:22:06.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.570 "listen_address": { 00:22:06.570 "trtype": "TCP", 00:22:06.570 "adrfam": "IPv4", 00:22:06.570 "traddr": "10.0.0.2", 00:22:06.570 "trsvcid": "4420" 00:22:06.570 }, 00:22:06.570 "peer_address": { 00:22:06.570 "trtype": "TCP", 00:22:06.570 "adrfam": "IPv4", 00:22:06.570 "traddr": "10.0.0.1", 00:22:06.570 "trsvcid": "48366" 00:22:06.570 }, 00:22:06.570 "auth": { 00:22:06.570 "state": "completed", 00:22:06.570 "digest": "sha512", 00:22:06.570 "dhgroup": "ffdhe3072" 00:22:06.570 } 00:22:06.570 } 00:22:06.570 ]' 00:22:06.570 00:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.570 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.828 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:06.828 00:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:07.760 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:07.761 00:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.019 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.276 00:22:08.276 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.276 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.276 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.534 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.534 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.534 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.534 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.792 { 00:22:08.792 "cntlid": 115, 00:22:08.792 "qid": 0, 00:22:08.792 "state": "enabled", 00:22:08.792 "thread": "nvmf_tgt_poll_group_000", 00:22:08.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.792 "listen_address": { 00:22:08.792 "trtype": "TCP", 00:22:08.792 "adrfam": "IPv4", 00:22:08.792 "traddr": "10.0.0.2", 00:22:08.792 "trsvcid": "4420" 00:22:08.792 }, 00:22:08.792 "peer_address": { 00:22:08.792 "trtype": "TCP", 00:22:08.792 "adrfam": "IPv4", 
00:22:08.792 "traddr": "10.0.0.1", 00:22:08.792 "trsvcid": "48394" 00:22:08.792 }, 00:22:08.792 "auth": { 00:22:08.792 "state": "completed", 00:22:08.792 "digest": "sha512", 00:22:08.792 "dhgroup": "ffdhe3072" 00:22:08.792 } 00:22:08.792 } 00:22:08.792 ]' 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.792 00:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.050 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:09.050 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:09.981 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.981 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.981 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.981 00:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.981 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.981 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.981 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.981 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
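The verification that follows each attach uses the same two RPCs and jq filters seen in the trace; a minimal sketch, with the expected values hard-coded for this sha512/ffdhe3072 pass:

  # Minimal check that the qpair negotiated the expected auth parameters (illustrative only).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # The host must report exactly one controller, named nvme0.
  [[ "$($RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

  # The target reports the qpair together with its negotiated auth block.
  qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha512" ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe3072" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

  # Detach before moving on to the next digest/dhgroup/key combination.
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0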
00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.241 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.806 00:22:10.806 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.806 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.807 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.064 { 00:22:11.064 "cntlid": 117, 00:22:11.064 "qid": 0, 00:22:11.064 "state": "enabled", 00:22:11.064 "thread": "nvmf_tgt_poll_group_000", 00:22:11.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.064 "listen_address": { 00:22:11.064 "trtype": "TCP", 
00:22:11.064 "adrfam": "IPv4", 00:22:11.064 "traddr": "10.0.0.2", 00:22:11.064 "trsvcid": "4420" 00:22:11.064 }, 00:22:11.064 "peer_address": { 00:22:11.064 "trtype": "TCP", 00:22:11.064 "adrfam": "IPv4", 00:22:11.064 "traddr": "10.0.0.1", 00:22:11.064 "trsvcid": "48406" 00:22:11.064 }, 00:22:11.064 "auth": { 00:22:11.064 "state": "completed", 00:22:11.064 "digest": "sha512", 00:22:11.064 "dhgroup": "ffdhe3072" 00:22:11.064 } 00:22:11.064 } 00:22:11.064 ]' 00:22:11.064 00:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.064 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.323 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:11.323 00:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.257 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.516 00:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.082 00:22:13.082 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.082 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.082 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.357 { 00:22:13.357 "cntlid": 119, 00:22:13.357 "qid": 0, 00:22:13.357 "state": "enabled", 00:22:13.357 "thread": "nvmf_tgt_poll_group_000", 00:22:13.357 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.357 "listen_address": { 00:22:13.357 "trtype": "TCP", 00:22:13.357 "adrfam": "IPv4", 00:22:13.357 "traddr": "10.0.0.2", 00:22:13.357 "trsvcid": "4420" 00:22:13.357 }, 00:22:13.357 "peer_address": { 00:22:13.357 "trtype": "TCP", 00:22:13.357 "adrfam": "IPv4", 00:22:13.357 "traddr": "10.0.0.1", 00:22:13.357 "trsvcid": "48438" 00:22:13.357 }, 00:22:13.357 "auth": { 00:22:13.357 "state": "completed", 00:22:13.357 "digest": "sha512", 00:22:13.357 "dhgroup": "ffdhe3072" 00:22:13.357 } 00:22:13.357 } 00:22:13.357 ]' 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.357 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.615 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:13.615 00:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.549 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.549 00:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.807 00:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.373 00:22:15.373 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.373 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.373 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.631 00:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.631 { 00:22:15.631 "cntlid": 121, 00:22:15.631 "qid": 0, 00:22:15.631 "state": "enabled", 00:22:15.631 "thread": "nvmf_tgt_poll_group_000", 00:22:15.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.631 "listen_address": { 00:22:15.631 "trtype": "TCP", 00:22:15.631 "adrfam": "IPv4", 00:22:15.631 "traddr": "10.0.0.2", 00:22:15.631 "trsvcid": "4420" 00:22:15.631 }, 00:22:15.631 "peer_address": { 00:22:15.631 "trtype": "TCP", 00:22:15.631 "adrfam": "IPv4", 00:22:15.631 "traddr": "10.0.0.1", 00:22:15.631 "trsvcid": "47232" 00:22:15.631 }, 00:22:15.631 "auth": { 00:22:15.631 "state": "completed", 00:22:15.631 "digest": "sha512", 00:22:15.631 "dhgroup": "ffdhe4096" 00:22:15.631 } 00:22:15.631 } 00:22:15.631 ]' 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.631 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.889 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:15.889 00:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
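Besides the bdev path, each pass also exercises the kernel initiator with the same secrets; in outline (the DHHC-1 values are placeholders here, the real ones are generated earlier in the test):

  # Kernel-initiator leg of one pass (illustrative; secrets are placeholders).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  KEY='DHHC-1:00:<generated host secret>'        # placeholder
  CTRL_KEY='DHHC-1:03:<generated ctrl secret>'   # placeholder

  # Connect through the kernel host stack, authenticating with the same DH-HMAC-CHAP secrets.
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"

  # Drop the connection and revoke the host before the next iteration.
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN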
00:22:16.820 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.821 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.821 00:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.079 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.644 00:22:17.644 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.644 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.644 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.902 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.902 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.903 { 00:22:17.903 "cntlid": 123, 00:22:17.903 "qid": 0, 00:22:17.903 "state": "enabled", 00:22:17.903 "thread": "nvmf_tgt_poll_group_000", 00:22:17.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.903 "listen_address": { 00:22:17.903 "trtype": "TCP", 00:22:17.903 "adrfam": "IPv4", 00:22:17.903 "traddr": "10.0.0.2", 00:22:17.903 "trsvcid": "4420" 00:22:17.903 }, 00:22:17.903 "peer_address": { 00:22:17.903 "trtype": "TCP", 00:22:17.903 "adrfam": "IPv4", 00:22:17.903 "traddr": "10.0.0.1", 00:22:17.903 "trsvcid": "47250" 00:22:17.903 }, 00:22:17.903 "auth": { 00:22:17.903 "state": "completed", 00:22:17.903 "digest": "sha512", 00:22:17.903 "dhgroup": "ffdhe4096" 00:22:17.903 } 00:22:17.903 } 00:22:17.903 ]' 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:17.903 00:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.903 00:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.903 00:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.903 00:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.161 00:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:18.161 00:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.094 00:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.094 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.353 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.917 00:22:19.917 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.917 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.918 00:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.175 00:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.175 { 00:22:20.175 "cntlid": 125, 00:22:20.175 "qid": 0, 00:22:20.175 "state": "enabled", 00:22:20.175 "thread": "nvmf_tgt_poll_group_000", 00:22:20.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.175 "listen_address": { 00:22:20.175 "trtype": "TCP", 00:22:20.175 "adrfam": "IPv4", 00:22:20.175 "traddr": "10.0.0.2", 00:22:20.175 "trsvcid": "4420" 00:22:20.175 }, 00:22:20.175 "peer_address": { 00:22:20.175 "trtype": "TCP", 00:22:20.175 "adrfam": "IPv4", 00:22:20.175 "traddr": "10.0.0.1", 00:22:20.175 "trsvcid": "47278" 00:22:20.175 }, 00:22:20.175 "auth": { 00:22:20.175 "state": "completed", 00:22:20.175 "digest": "sha512", 00:22:20.175 "dhgroup": "ffdhe4096" 00:22:20.175 } 00:22:20.175 } 00:22:20.175 ]' 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.175 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.434 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:20.434 00:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.369 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.627 00:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.193 00:22:22.193 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.193 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.193 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.451 00:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.451 { 00:22:22.451 "cntlid": 127, 00:22:22.451 "qid": 0, 00:22:22.451 "state": "enabled", 00:22:22.451 "thread": "nvmf_tgt_poll_group_000", 00:22:22.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.451 "listen_address": { 00:22:22.451 "trtype": "TCP", 00:22:22.451 "adrfam": "IPv4", 00:22:22.451 "traddr": "10.0.0.2", 00:22:22.451 "trsvcid": "4420" 00:22:22.451 }, 00:22:22.451 "peer_address": { 00:22:22.451 "trtype": "TCP", 00:22:22.451 "adrfam": "IPv4", 00:22:22.451 "traddr": "10.0.0.1", 00:22:22.451 "trsvcid": "47310" 00:22:22.451 }, 00:22:22.451 "auth": { 00:22:22.451 "state": "completed", 00:22:22.451 "digest": "sha512", 00:22:22.451 "dhgroup": "ffdhe4096" 00:22:22.451 } 00:22:22.451 } 00:22:22.451 ]' 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.451 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.709 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:22.709 00:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.657 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.914 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.915 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.915 00:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.479 00:22:24.479 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.479 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.479 
00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.735 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.735 { 00:22:24.735 "cntlid": 129, 00:22:24.735 "qid": 0, 00:22:24.735 "state": "enabled", 00:22:24.735 "thread": "nvmf_tgt_poll_group_000", 00:22:24.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.735 "listen_address": { 00:22:24.735 "trtype": "TCP", 00:22:24.735 "adrfam": "IPv4", 00:22:24.735 "traddr": "10.0.0.2", 00:22:24.735 "trsvcid": "4420" 00:22:24.735 }, 00:22:24.735 "peer_address": { 00:22:24.736 "trtype": "TCP", 00:22:24.736 "adrfam": "IPv4", 00:22:24.736 "traddr": "10.0.0.1", 00:22:24.736 "trsvcid": "50840" 00:22:24.736 }, 00:22:24.736 "auth": { 00:22:24.736 "state": "completed", 00:22:24.736 "digest": "sha512", 00:22:24.736 "dhgroup": "ffdhe6144" 00:22:24.736 } 00:22:24.736 } 00:22:24.736 ]' 00:22:24.736 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.736 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.736 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.736 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.736 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.992 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.992 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.992 00:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.248 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:25.248 00:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret 
DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.181 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.113 00:22:27.113 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.113 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.113 00:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.113 { 00:22:27.113 "cntlid": 131, 00:22:27.113 "qid": 0, 00:22:27.113 "state": "enabled", 00:22:27.113 "thread": "nvmf_tgt_poll_group_000", 00:22:27.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.113 "listen_address": { 00:22:27.113 "trtype": "TCP", 00:22:27.113 "adrfam": "IPv4", 00:22:27.113 "traddr": "10.0.0.2", 00:22:27.113 "trsvcid": "4420" 00:22:27.113 }, 00:22:27.113 "peer_address": { 00:22:27.113 "trtype": "TCP", 00:22:27.113 "adrfam": "IPv4", 00:22:27.113 "traddr": "10.0.0.1", 00:22:27.113 "trsvcid": "50866" 00:22:27.113 }, 00:22:27.113 "auth": { 00:22:27.113 "state": "completed", 00:22:27.113 "digest": "sha512", 00:22:27.113 "dhgroup": "ffdhe6144" 00:22:27.113 } 00:22:27.113 } 00:22:27.113 ]' 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.113 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.370 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.370 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.370 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.370 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.370 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.626 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:27.627 00:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.562 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.820 00:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.385 00:22:29.385 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.385 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.385 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.643 { 00:22:29.643 "cntlid": 133, 00:22:29.643 "qid": 0, 00:22:29.643 "state": "enabled", 00:22:29.643 "thread": "nvmf_tgt_poll_group_000", 00:22:29.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.643 "listen_address": { 00:22:29.643 "trtype": "TCP", 00:22:29.643 "adrfam": "IPv4", 00:22:29.643 "traddr": "10.0.0.2", 00:22:29.643 "trsvcid": "4420" 00:22:29.643 }, 00:22:29.643 "peer_address": { 00:22:29.643 "trtype": "TCP", 00:22:29.643 "adrfam": "IPv4", 00:22:29.643 "traddr": "10.0.0.1", 00:22:29.643 "trsvcid": "50894" 00:22:29.643 }, 00:22:29.643 "auth": { 00:22:29.643 "state": "completed", 00:22:29.643 "digest": "sha512", 00:22:29.643 "dhgroup": "ffdhe6144" 00:22:29.643 } 00:22:29.643 } 00:22:29.643 ]' 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.643 00:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.901 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret 
DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:29.901 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.836 00:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:31.094 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.666 00:22:31.666 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.666 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.666 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.923 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.923 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.923 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.923 00:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.923 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.923 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.923 { 00:22:31.923 "cntlid": 135, 00:22:31.923 "qid": 0, 00:22:31.923 "state": "enabled", 00:22:31.923 "thread": "nvmf_tgt_poll_group_000", 00:22:31.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.923 "listen_address": { 00:22:31.923 "trtype": "TCP", 00:22:31.923 "adrfam": "IPv4", 00:22:31.923 "traddr": "10.0.0.2", 00:22:31.923 "trsvcid": "4420" 00:22:31.923 }, 00:22:31.923 "peer_address": { 00:22:31.923 "trtype": "TCP", 00:22:31.923 "adrfam": "IPv4", 00:22:31.923 "traddr": "10.0.0.1", 00:22:31.923 "trsvcid": "50924" 00:22:31.923 }, 00:22:31.923 "auth": { 00:22:31.923 "state": "completed", 00:22:31.923 "digest": "sha512", 00:22:31.923 "dhgroup": "ffdhe6144" 00:22:31.923 } 00:22:31.923 } 00:22:31.923 ]' 00:22:31.923 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.923 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.923 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.181 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.181 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.181 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.181 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.181 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.438 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:32.438 00:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.373 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.631 00:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.562 00:22:34.562 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.562 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.562 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.819 { 00:22:34.819 "cntlid": 137, 00:22:34.819 "qid": 0, 00:22:34.819 "state": "enabled", 00:22:34.819 "thread": "nvmf_tgt_poll_group_000", 00:22:34.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.819 "listen_address": { 00:22:34.819 "trtype": "TCP", 00:22:34.819 "adrfam": "IPv4", 00:22:34.819 "traddr": "10.0.0.2", 00:22:34.819 "trsvcid": "4420" 00:22:34.819 }, 00:22:34.819 "peer_address": { 00:22:34.819 "trtype": "TCP", 00:22:34.819 "adrfam": "IPv4", 00:22:34.819 "traddr": "10.0.0.1", 00:22:34.819 "trsvcid": "52862" 00:22:34.819 }, 00:22:34.819 "auth": { 00:22:34.819 "state": "completed", 00:22:34.819 "digest": "sha512", 00:22:34.819 "dhgroup": "ffdhe8192" 00:22:34.819 } 00:22:34.819 } 00:22:34.819 ]' 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.819 00:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.076 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:35.076 00:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.008 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.267 00:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.267 00:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.202 00:22:37.202 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.202 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.202 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.460 { 00:22:37.460 "cntlid": 139, 00:22:37.460 "qid": 0, 00:22:37.460 "state": "enabled", 00:22:37.460 "thread": "nvmf_tgt_poll_group_000", 00:22:37.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.460 "listen_address": { 00:22:37.460 "trtype": "TCP", 00:22:37.460 "adrfam": "IPv4", 00:22:37.460 "traddr": "10.0.0.2", 00:22:37.460 "trsvcid": "4420" 00:22:37.460 }, 00:22:37.460 "peer_address": { 00:22:37.460 "trtype": "TCP", 00:22:37.460 "adrfam": "IPv4", 00:22:37.460 "traddr": "10.0.0.1", 00:22:37.460 "trsvcid": "52898" 00:22:37.460 }, 00:22:37.460 "auth": { 00:22:37.460 "state": "completed", 00:22:37.460 "digest": "sha512", 00:22:37.460 "dhgroup": "ffdhe8192" 00:22:37.460 } 00:22:37.460 } 00:22:37.460 ]' 00:22:37.460 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.461 00:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.461 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.718 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:37.718 00:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: --dhchap-ctrl-secret DHHC-1:02:M2U0YjhjZmMyYThjY2Q1NzQyYjVmNGQ3NGJmMDMxNzY0Y2M2MjBlZWZiOGIxY2U2T7x86w==: 00:22:38.653 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:38.912 00:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.169 00:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.169 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.732 00:22:39.989 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.989 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.989 00:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.247 { 00:22:40.247 "cntlid": 141, 00:22:40.247 "qid": 0, 00:22:40.247 "state": "enabled", 00:22:40.247 "thread": "nvmf_tgt_poll_group_000", 00:22:40.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.247 "listen_address": { 00:22:40.247 "trtype": "TCP", 00:22:40.247 "adrfam": "IPv4", 00:22:40.247 "traddr": "10.0.0.2", 00:22:40.247 "trsvcid": "4420" 00:22:40.247 }, 00:22:40.247 "peer_address": { 00:22:40.247 "trtype": "TCP", 00:22:40.247 "adrfam": "IPv4", 00:22:40.247 "traddr": "10.0.0.1", 00:22:40.247 "trsvcid": "52920" 00:22:40.247 }, 00:22:40.247 "auth": { 00:22:40.247 "state": "completed", 00:22:40.247 "digest": "sha512", 00:22:40.247 "dhgroup": "ffdhe8192" 00:22:40.247 } 00:22:40.247 } 00:22:40.247 ]' 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.247 00:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.247 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.505 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:40.505 00:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:01:YjJlYTIwMWY4ZjljNzRhNmE3NTBlZmMyOTk1Y2UzNmMgmxn8: 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.438 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.696 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:41.696 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.697 00:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.697 00:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.631 00:22:42.631 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.631 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.631 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.889 { 00:22:42.889 "cntlid": 143, 00:22:42.889 "qid": 0, 00:22:42.889 "state": "enabled", 00:22:42.889 "thread": "nvmf_tgt_poll_group_000", 00:22:42.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:42.889 "listen_address": { 00:22:42.889 "trtype": "TCP", 00:22:42.889 "adrfam": "IPv4", 00:22:42.889 "traddr": "10.0.0.2", 00:22:42.889 "trsvcid": "4420" 00:22:42.889 }, 00:22:42.889 "peer_address": { 00:22:42.889 "trtype": "TCP", 00:22:42.889 "adrfam": "IPv4", 00:22:42.889 "traddr": "10.0.0.1", 00:22:42.889 "trsvcid": "52946" 00:22:42.889 }, 00:22:42.889 "auth": { 00:22:42.889 "state": "completed", 00:22:42.889 "digest": "sha512", 00:22:42.889 "dhgroup": "ffdhe8192" 00:22:42.889 } 00:22:42.889 } 00:22:42.889 ]' 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.889 
00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.889 00:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.148 00:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:43.148 00:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.082 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.340 00:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.340 00:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.274 00:22:45.274 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.274 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.274 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.531 { 00:22:45.531 "cntlid": 145, 00:22:45.531 "qid": 0, 00:22:45.531 "state": "enabled", 00:22:45.531 "thread": "nvmf_tgt_poll_group_000", 00:22:45.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.531 "listen_address": { 00:22:45.531 "trtype": "TCP", 00:22:45.531 "adrfam": "IPv4", 00:22:45.531 "traddr": "10.0.0.2", 00:22:45.531 "trsvcid": "4420" 00:22:45.531 }, 00:22:45.531 "peer_address": { 00:22:45.531 
"trtype": "TCP", 00:22:45.531 "adrfam": "IPv4", 00:22:45.531 "traddr": "10.0.0.1", 00:22:45.531 "trsvcid": "55014" 00:22:45.531 }, 00:22:45.531 "auth": { 00:22:45.531 "state": "completed", 00:22:45.531 "digest": "sha512", 00:22:45.531 "dhgroup": "ffdhe8192" 00:22:45.531 } 00:22:45.531 } 00:22:45.531 ]' 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.531 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.792 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.792 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.792 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.792 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.792 00:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.049 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:46.049 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NzMxYjNkNzViNTZhYTRhODAyY2Q5ZGI3ODBiNGNmZmU3MTRlZGNjNzVmYTA1ZWQw7xnhKg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjJjNzMyYjliNDdmMTIyMWY1OWUxNjYzZTU2NzA1ZDE4ZGNjM2I4MmI0YmNjZDNjZWRkNTE0YWFlN2EyZF8y3sU=: 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:46.983 00:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:47.609 request: 00:22:47.609 { 00:22:47.609 "name": "nvme0", 00:22:47.609 "trtype": "tcp", 00:22:47.609 "traddr": "10.0.0.2", 00:22:47.609 "adrfam": "ipv4", 00:22:47.609 "trsvcid": "4420", 00:22:47.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:47.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.609 "prchk_reftag": false, 00:22:47.609 "prchk_guard": false, 00:22:47.609 "hdgst": false, 00:22:47.609 "ddgst": false, 00:22:47.609 "dhchap_key": "key2", 00:22:47.609 "allow_unrecognized_csi": false, 00:22:47.609 "method": "bdev_nvme_attach_controller", 00:22:47.609 "req_id": 1 00:22:47.609 } 00:22:47.609 Got JSON-RPC error response 00:22:47.609 response: 00:22:47.609 { 00:22:47.609 "code": -5, 00:22:47.609 "message": "Input/output error" 00:22:47.609 } 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.609 00:50:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:47.609 00:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:48.600 request: 00:22:48.600 { 00:22:48.600 "name": "nvme0", 00:22:48.600 "trtype": "tcp", 00:22:48.600 "traddr": "10.0.0.2", 00:22:48.600 "adrfam": "ipv4", 00:22:48.600 "trsvcid": "4420", 00:22:48.600 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:48.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:48.600 "prchk_reftag": false, 00:22:48.600 "prchk_guard": false, 00:22:48.600 "hdgst": false, 00:22:48.600 "ddgst": false, 00:22:48.600 "dhchap_key": "key1", 00:22:48.600 "dhchap_ctrlr_key": "ckey2", 00:22:48.600 "allow_unrecognized_csi": false, 00:22:48.600 "method": "bdev_nvme_attach_controller", 00:22:48.600 "req_id": 1 00:22:48.600 } 00:22:48.600 Got JSON-RPC error response 00:22:48.600 response: 00:22:48.600 { 00:22:48.600 "code": -5, 00:22:48.600 "message": "Input/output error" 00:22:48.600 } 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:48.600 00:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.600 00:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.210 request: 00:22:49.210 { 00:22:49.210 "name": "nvme0", 00:22:49.210 "trtype": "tcp", 00:22:49.210 "traddr": "10.0.0.2", 00:22:49.210 "adrfam": "ipv4", 00:22:49.210 "trsvcid": "4420", 00:22:49.210 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:49.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:49.210 "prchk_reftag": false, 00:22:49.210 "prchk_guard": false, 00:22:49.210 "hdgst": false, 00:22:49.210 "ddgst": false, 00:22:49.210 "dhchap_key": "key1", 00:22:49.210 "dhchap_ctrlr_key": "ckey1", 00:22:49.210 "allow_unrecognized_csi": false, 00:22:49.210 "method": "bdev_nvme_attach_controller", 00:22:49.210 "req_id": 1 00:22:49.210 } 00:22:49.210 Got JSON-RPC error response 00:22:49.210 response: 00:22:49.210 { 00:22:49.210 "code": -5, 00:22:49.210 "message": "Input/output error" 00:22:49.210 } 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 251676 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 251676 ']' 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 251676 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.210 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251676 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251676' 00:22:49.504 killing process with pid 251676 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 251676 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 251676 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=274764 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 274764 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 274764 ']' 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.504 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 274764 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 274764 ']' 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
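Editor's note: the restart above brings up a second nvmf_tgt with --wait-for-rpc and the nvmf_auth log flag so the DH-HMAC-CHAP key files can be registered in the keyring before any host connects. A minimal sketch of that start-up step, using only the flags visible in this log (the netns name, core mask and binary path are specific to this CI host; the readiness check is illustrative, the suite itself uses its waitforlisten helper):

    # start the target paused (no subsystem serving until RPC init) with nvmf_auth debug logging
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # wait until the RPC socket answers before loading keys or creating subsystems
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null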
00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.761 00:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.020 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.020 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:50.020 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:50.020 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.020 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 null0 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UAn 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.U5g ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U5g 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pGQ 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.8BU ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8BU 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:50.279 00:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AaU 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MN8 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN8 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rg0 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
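Editor's note: the exchange above is one connect_authenticate pass condensed; the sketch below lists the RPCs it exercises, with the key name, key file, NQNs and addresses copied from this log (rpc.py stands for the full scripts/rpc.py path shown above; the digest/dhgroup restriction is applied earlier in the log via bdev_nvme_set_options):

    # target side: register the key material and bind it to the host NQN
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Rg0
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
    # host side: limit the initiator to the digest/dhgroup under test, then attach with the key
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3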
00:22:50.279 00:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.656 nvme0n1 00:22:51.656 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.656 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.656 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.913 { 00:22:51.913 "cntlid": 1, 00:22:51.913 "qid": 0, 00:22:51.913 "state": "enabled", 00:22:51.913 "thread": "nvmf_tgt_poll_group_000", 00:22:51.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.913 "listen_address": { 00:22:51.913 "trtype": "TCP", 00:22:51.913 "adrfam": "IPv4", 00:22:51.913 "traddr": "10.0.0.2", 00:22:51.913 "trsvcid": "4420" 00:22:51.913 }, 00:22:51.913 "peer_address": { 00:22:51.913 "trtype": "TCP", 00:22:51.913 "adrfam": "IPv4", 00:22:51.913 "traddr": "10.0.0.1", 00:22:51.913 "trsvcid": "55088" 00:22:51.913 }, 00:22:51.913 "auth": { 00:22:51.913 "state": "completed", 00:22:51.913 "digest": "sha512", 00:22:51.913 "dhgroup": "ffdhe8192" 00:22:51.913 } 00:22:51.913 } 00:22:51.913 ]' 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.913 00:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.913 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:51.913 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.171 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.171 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.171 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.430 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:52.430 00:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.365 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.932 request: 00:22:53.932 { 00:22:53.932 "name": "nvme0", 00:22:53.932 "trtype": "tcp", 00:22:53.932 "traddr": "10.0.0.2", 00:22:53.932 "adrfam": "ipv4", 00:22:53.932 "trsvcid": "4420", 00:22:53.932 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:53.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:53.932 "prchk_reftag": false, 00:22:53.932 "prchk_guard": false, 00:22:53.932 "hdgst": false, 00:22:53.932 "ddgst": false, 00:22:53.932 "dhchap_key": "key3", 00:22:53.932 "allow_unrecognized_csi": false, 00:22:53.932 "method": "bdev_nvme_attach_controller", 00:22:53.932 "req_id": 1 00:22:53.932 } 00:22:53.932 Got JSON-RPC error response 00:22:53.932 response: 00:22:53.932 { 00:22:53.932 "code": -5, 00:22:53.932 "message": "Input/output error" 00:22:53.932 } 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:53.932 00:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.932 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.501 request: 00:22:54.501 { 00:22:54.501 "name": "nvme0", 00:22:54.501 "trtype": "tcp", 00:22:54.501 "traddr": "10.0.0.2", 00:22:54.501 "adrfam": "ipv4", 00:22:54.501 "trsvcid": "4420", 00:22:54.501 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:54.501 "prchk_reftag": false, 00:22:54.501 "prchk_guard": false, 00:22:54.501 "hdgst": false, 00:22:54.501 "ddgst": false, 00:22:54.501 "dhchap_key": "key3", 00:22:54.501 "allow_unrecognized_csi": false, 00:22:54.501 "method": "bdev_nvme_attach_controller", 00:22:54.501 "req_id": 1 00:22:54.501 } 00:22:54.501 Got JSON-RPC error response 00:22:54.501 response: 00:22:54.501 { 00:22:54.501 "code": -5, 00:22:54.501 "message": "Input/output error" 00:22:54.501 } 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.501 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.760 00:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:55.328 request: 00:22:55.328 { 00:22:55.328 "name": "nvme0", 00:22:55.328 "trtype": "tcp", 00:22:55.328 "traddr": "10.0.0.2", 00:22:55.328 "adrfam": "ipv4", 00:22:55.328 "trsvcid": "4420", 00:22:55.328 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:55.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:55.328 "prchk_reftag": false, 00:22:55.328 "prchk_guard": false, 00:22:55.328 "hdgst": false, 00:22:55.328 "ddgst": false, 00:22:55.328 "dhchap_key": "key0", 00:22:55.328 "dhchap_ctrlr_key": "key1", 00:22:55.328 "allow_unrecognized_csi": false, 00:22:55.328 "method": "bdev_nvme_attach_controller", 00:22:55.328 "req_id": 1 00:22:55.328 } 00:22:55.328 Got JSON-RPC error response 00:22:55.328 response: 00:22:55.328 { 00:22:55.328 "code": -5, 00:22:55.328 "message": "Input/output error" 00:22:55.328 } 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.328 00:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:55.328 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:55.586 nvme0n1 00:22:55.586 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:55.586 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:55.586 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.844 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.844 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.844 00:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:56.102 00:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:57.485 nvme0n1 00:22:57.485 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:57.485 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:57.485 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.744 00:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:58.003 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.003 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:58.003 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: --dhchap-ctrl-secret DHHC-1:03:YzZhMTYzZDJiMzg0ZDI4Y2Y1NTE2YjM5MjhjNWIzZTBiNjFhNjcyMGMzNDFkNmY0MzgyZjdkOWY1MGMyYzJjYwIk8b0=: 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.941 00:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:59.199 00:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:00.137 request: 00:23:00.137 { 00:23:00.137 "name": "nvme0", 00:23:00.137 "trtype": "tcp", 00:23:00.137 "traddr": "10.0.0.2", 00:23:00.137 "adrfam": "ipv4", 00:23:00.137 "trsvcid": "4420", 00:23:00.137 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:00.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:00.137 "prchk_reftag": false, 00:23:00.137 "prchk_guard": false, 00:23:00.137 "hdgst": false, 00:23:00.137 "ddgst": false, 00:23:00.137 "dhchap_key": "key1", 00:23:00.137 "allow_unrecognized_csi": false, 00:23:00.137 "method": "bdev_nvme_attach_controller", 00:23:00.137 "req_id": 1 00:23:00.137 } 00:23:00.137 Got JSON-RPC error response 00:23:00.137 response: 00:23:00.137 { 00:23:00.137 "code": -5, 00:23:00.137 "message": "Input/output error" 00:23:00.137 } 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:00.137 00:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.518 nvme0n1 00:23:01.518 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:01.518 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:01.518 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.777 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.777 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.777 00:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:02.036 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:02.296 nvme0n1 00:23:02.296 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:02.296 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:02.296 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.555 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.555 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.555 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: '' 2s 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: ]] 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mjg1M2JiZTY2ZTgxYTUzMWFkMDFkMjA1NDQyZTk1MGVREEXO: 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:03.125 00:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:05.031 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:05.031 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:05.031 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:05.031 00:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: 2s 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: ]] 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWM4YjAwMjIxMzhiZmQ0ODc0ZjBlMzcxZWJkZTk5YWRhZDBmNDhlYTFlMTZjNDNjQ9tKPg==: 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:05.031 00:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.936 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.194 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.194 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:07.194 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:07.195 00:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:08.573 nvme0n1 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:08.573 00:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:09.506 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:09.764 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:09.764 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:09.764 00:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.023 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.023 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:10.023 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.023 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:10.282 00:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:10.851 request: 00:23:10.851 { 00:23:10.851 "name": "nvme0", 00:23:10.851 "dhchap_key": "key1", 00:23:10.851 "dhchap_ctrlr_key": "key3", 00:23:10.851 "method": "bdev_nvme_set_keys", 00:23:10.851 "req_id": 1 00:23:10.851 } 00:23:10.851 Got JSON-RPC error response 00:23:10.851 response: 00:23:10.851 { 00:23:10.851 "code": -13, 00:23:10.851 "message": "Permission denied" 00:23:10.851 } 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.112 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:11.371 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:11.371 00:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:12.308 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:12.308 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:12.308 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:12.565 00:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:13.947 nvme0n1 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
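The entries above and the response that follows exercise SPDK's DH-HMAC-CHAP re-key path: target/auth.sh first stages a new key pair for this host on the subsystem with nvmf_subsystem_set_keys, then expects bdev_nvme_set_keys on the host side to succeed only when it presents the same pair. A minimal sketch of the matching sequence, using the socket path, NQNs and key names that appear in this log (it assumes both keys were already registered under the names key2/key3 earlier in the test; rpc_cmd in the log wraps rpc.py against the target's default RPC socket):

# Target side: allow key2/key3 for this host on the subsystem.
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: re-authenticate the existing controller with the same pair.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

Presenting a controller key the target was not given, as the NOT ... --dhchap-ctrlr-key key0 attempt above does, is expected to be rejected; the request recorded next returns JSON-RPC error -13, Permission denied.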
00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:13.947 00:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:14.886 request: 00:23:14.886 { 00:23:14.886 "name": "nvme0", 00:23:14.886 "dhchap_key": "key2", 00:23:14.886 "dhchap_ctrlr_key": "key0", 00:23:14.886 "method": "bdev_nvme_set_keys", 00:23:14.886 "req_id": 1 00:23:14.886 } 00:23:14.886 Got JSON-RPC error response 00:23:14.886 response: 00:23:14.886 { 00:23:14.886 "code": -13, 00:23:14.886 "message": "Permission denied" 00:23:14.886 } 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.886 00:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:15.144 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:15.144 00:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:16.082 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:16.082 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:16.082 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 251702 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 251702 ']' 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 251702 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:16.341 00:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251702 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251702' 00:23:16.341 killing process with pid 251702 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 251702 00:23:16.341 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 251702 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.912 rmmod nvme_tcp 00:23:16.912 rmmod nvme_fabrics 00:23:16.912 rmmod nvme_keyring 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 274764 ']' 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 274764 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 274764 ']' 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 274764 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274764 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274764' 00:23:16.912 killing process with pid 274764 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 274764 00:23:16.912 00:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 274764 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.171 00:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.079 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.079 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UAn /tmp/spdk.key-sha256.pGQ /tmp/spdk.key-sha384.AaU /tmp/spdk.key-sha512.Rg0 /tmp/spdk.key-sha512.U5g /tmp/spdk.key-sha384.8BU /tmp/spdk.key-sha256.MN8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:19.079 00:23:19.079 real 3m33.464s 00:23:19.079 user 8m19.325s 00:23:19.079 sys 0m27.728s 00:23:19.079 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.079 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.079 ************************************ 00:23:19.079 END TEST nvmf_auth_target 00:23:19.079 ************************************ 00:23:19.079 00:50:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:19.080 00:50:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:19.080 00:50:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:19.080 00:50:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.080 00:50:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:19.080 ************************************ 00:23:19.080 START TEST nvmf_bdevio_no_huge 00:23:19.080 ************************************ 00:23:19.080 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:19.339 * Looking for test storage... 
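The teardown recorded just above closes out the nvmf_auth_target run: the host-side RPC app (pid 251702) and the nvmf target (pid 274764) are killed, the kernel initiator modules pulled in by nvme connect are unloaded, and the generated DH-HMAC-CHAP key files are removed before nvmf_bdevio_no_huge starts. A rough manual equivalent, assuming the same phy TCP setup as this run (killprocess in the log is the autotest helper; plain kill plus wait approximates it):

# Stop the host RPC app and the nvmf_tgt process (PIDs taken from the log above).
kill 251702 274764

# Unload the kernel initiator stack; nvme_keyring goes with it, as the rmmod lines above show.
modprobe -r nvme-tcp
modprobe -r nvme-fabrics

# Drop the temporary DHCHAP key files generated for the test.
rm -f /tmp/spdk.key-null.UAn /tmp/spdk.key-sha256.pGQ /tmp/spdk.key-sha384.AaU \
      /tmp/spdk.key-sha512.Rg0 /tmp/spdk.key-sha512.U5g /tmp/spdk.key-sha384.8BU /tmp/spdk.key-sha256.MN8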
00:23:19.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.339 --rc genhtml_branch_coverage=1 00:23:19.339 --rc genhtml_function_coverage=1 00:23:19.339 --rc genhtml_legend=1 00:23:19.339 --rc geninfo_all_blocks=1 00:23:19.339 --rc geninfo_unexecuted_blocks=1 00:23:19.339 00:23:19.339 ' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.339 --rc genhtml_branch_coverage=1 00:23:19.339 --rc genhtml_function_coverage=1 00:23:19.339 --rc genhtml_legend=1 00:23:19.339 --rc geninfo_all_blocks=1 00:23:19.339 --rc geninfo_unexecuted_blocks=1 00:23:19.339 00:23:19.339 ' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.339 --rc genhtml_branch_coverage=1 00:23:19.339 --rc genhtml_function_coverage=1 00:23:19.339 --rc genhtml_legend=1 00:23:19.339 --rc geninfo_all_blocks=1 00:23:19.339 --rc geninfo_unexecuted_blocks=1 00:23:19.339 00:23:19.339 ' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.339 --rc genhtml_branch_coverage=1 00:23:19.339 --rc genhtml_function_coverage=1 00:23:19.339 --rc genhtml_legend=1 00:23:19.339 --rc geninfo_all_blocks=1 00:23:19.339 --rc geninfo_unexecuted_blocks=1 00:23:19.339 00:23:19.339 ' 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.339 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:19.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.340 00:50:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.877 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.878 
00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:21.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:21.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:21.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:21.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:23:21.878 00:23:21.878 --- 10.0.0.2 ping statistics --- 00:23:21.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.878 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:21.878 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:21.878 00:23:21.879 --- 10.0.0.1 ping statistics --- 00:23:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.879 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=280017 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 280017 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 280017 ']' 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.879 [2024-12-07 00:50:37.714486] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:23:21.879 [2024-12-07 00:50:37.714579] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:21.879 [2024-12-07 00:50:37.790455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.879 [2024-12-07 00:50:37.836310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.879 [2024-12-07 00:50:37.836368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.879 [2024-12-07 00:50:37.836389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.879 [2024-12-07 00:50:37.836400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.879 [2024-12-07 00:50:37.836411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.879 [2024-12-07 00:50:37.837538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:21.879 [2024-12-07 00:50:37.837598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:21.879 [2024-12-07 00:50:37.837662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:21.879 [2024-12-07 00:50:37.837665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.879 00:50:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.879 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.879 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.879 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.879 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:21.879 [2024-12-07 00:50:38.020916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:22.139 Malloc0 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:22.139 [2024-12-07 00:50:38.059236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:22.139 { 00:23:22.139 "params": { 00:23:22.139 "name": "Nvme$subsystem", 00:23:22.139 "trtype": "$TEST_TRANSPORT", 00:23:22.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.139 "adrfam": "ipv4", 00:23:22.139 "trsvcid": "$NVMF_PORT", 00:23:22.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.139 "hdgst": ${hdgst:-false}, 00:23:22.139 "ddgst": ${ddgst:-false} 00:23:22.139 }, 00:23:22.139 "method": "bdev_nvme_attach_controller" 00:23:22.139 } 00:23:22.139 EOF 00:23:22.139 )") 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:22.139 00:50:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:22.139 "params": { 00:23:22.139 "name": "Nvme1", 00:23:22.139 "trtype": "tcp", 00:23:22.139 "traddr": "10.0.0.2", 00:23:22.139 "adrfam": "ipv4", 00:23:22.139 "trsvcid": "4420", 00:23:22.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.139 "hdgst": false, 00:23:22.139 "ddgst": false 00:23:22.139 }, 00:23:22.139 "method": "bdev_nvme_attach_controller" 00:23:22.139 }' 00:23:22.139 [2024-12-07 00:50:38.108192] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:23:22.139 [2024-12-07 00:50:38.108284] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid280055 ] 00:23:22.139 [2024-12-07 00:50:38.184443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:22.139 [2024-12-07 00:50:38.235467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.139 [2024-12-07 00:50:38.235515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.139 [2024-12-07 00:50:38.235518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.398 I/O targets: 00:23:22.398 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:22.398 00:23:22.398 00:23:22.398 CUnit - A unit testing framework for C - Version 2.1-3 00:23:22.398 http://cunit.sourceforge.net/ 00:23:22.398 00:23:22.398 00:23:22.398 Suite: bdevio tests on: Nvme1n1 00:23:22.398 Test: blockdev write read block ...passed 00:23:22.657 Test: blockdev write zeroes read block ...passed 00:23:22.657 Test: blockdev write zeroes read no split ...passed 00:23:22.657 Test: blockdev write zeroes read split ...passed 00:23:22.657 Test: blockdev write zeroes read split partial ...passed 00:23:22.657 Test: blockdev reset ...[2024-12-07 00:50:38.626455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:22.657 [2024-12-07 00:50:38.626570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1988ef0 (9): Bad file descriptor 00:23:22.657 [2024-12-07 00:50:38.643202] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:22.657 passed 00:23:22.657 Test: blockdev write read 8 blocks ...passed 00:23:22.657 Test: blockdev write read size > 128k ...passed 00:23:22.657 Test: blockdev write read invalid size ...passed 00:23:22.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:22.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:22.657 Test: blockdev write read max offset ...passed 00:23:22.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:22.916 Test: blockdev writev readv 8 blocks ...passed 00:23:22.916 Test: blockdev writev readv 30 x 1block ...passed 00:23:22.916 Test: blockdev writev readv block ...passed 00:23:22.916 Test: blockdev writev readv size > 128k ...passed 00:23:22.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:22.916 Test: blockdev comparev and writev ...[2024-12-07 00:50:38.854240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.854304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.854322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.854626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.854650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.854672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.854689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.854993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.855025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.855047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.916 [2024-12-07 00:50:38.855064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:22.916 [2024-12-07 00:50:38.855373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.917 [2024-12-07 00:50:38.855398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:22.917 [2024-12-07 00:50:38.855420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:22.917 [2024-12-07 00:50:38.855436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:22.917 passed 00:23:22.917 Test: blockdev nvme passthru rw ...passed 00:23:22.917 Test: blockdev nvme passthru vendor specific ...[2024-12-07 00:50:38.937252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.917 [2024-12-07 00:50:38.937279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:22.917 [2024-12-07 00:50:38.937414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.917 [2024-12-07 00:50:38.937436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:22.917 [2024-12-07 00:50:38.937572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.917 [2024-12-07 00:50:38.937596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:22.917 [2024-12-07 00:50:38.937732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:22.917 [2024-12-07 00:50:38.937756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:22.917 passed 00:23:22.917 Test: blockdev nvme admin passthru ...passed 00:23:22.917 Test: blockdev copy ...passed 00:23:22.917 00:23:22.917 Run Summary: Type Total Ran Passed Failed Inactive 00:23:22.917 suites 1 1 n/a 0 0 00:23:22.917 tests 23 23 23 0 0 00:23:22.917 asserts 152 152 152 0 n/a 00:23:22.917 00:23:22.917 Elapsed time = 1.068 seconds 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.175 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.175 rmmod nvme_tcp 00:23:23.434 rmmod nvme_fabrics 00:23:23.434 rmmod nvme_keyring 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 280017 ']' 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 280017 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 280017 ']' 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 280017 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280017 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280017' 00:23:23.434 killing process with pid 280017 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 280017 00:23:23.434 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 280017 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.693 00:50:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.230 00:23:26.230 real 0m6.578s 00:23:26.230 user 0m10.340s 00:23:26.230 sys 0m2.605s 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.230 ************************************ 00:23:26.230 END TEST nvmf_bdevio_no_huge 00:23:26.230 ************************************ 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:26.230 ************************************ 00:23:26.230 START TEST nvmf_tls 00:23:26.230 ************************************ 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:26.230 * Looking for test storage... 00:23:26.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.230 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.231 --rc genhtml_branch_coverage=1 00:23:26.231 --rc genhtml_function_coverage=1 00:23:26.231 --rc genhtml_legend=1 00:23:26.231 --rc geninfo_all_blocks=1 00:23:26.231 --rc geninfo_unexecuted_blocks=1 00:23:26.231 00:23:26.231 ' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.231 --rc genhtml_branch_coverage=1 00:23:26.231 --rc genhtml_function_coverage=1 00:23:26.231 --rc genhtml_legend=1 00:23:26.231 --rc geninfo_all_blocks=1 00:23:26.231 --rc geninfo_unexecuted_blocks=1 00:23:26.231 00:23:26.231 ' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.231 --rc genhtml_branch_coverage=1 00:23:26.231 --rc genhtml_function_coverage=1 00:23:26.231 --rc genhtml_legend=1 00:23:26.231 --rc geninfo_all_blocks=1 00:23:26.231 --rc geninfo_unexecuted_blocks=1 00:23:26.231 00:23:26.231 ' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.231 --rc genhtml_branch_coverage=1 00:23:26.231 --rc genhtml_function_coverage=1 00:23:26.231 --rc genhtml_legend=1 00:23:26.231 --rc geninfo_all_blocks=1 00:23:26.231 --rc geninfo_unexecuted_blocks=1 00:23:26.231 00:23:26.231 ' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.231 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.232 00:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.232 00:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:28.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:28.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:28.135 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:28.135 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:28.135 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.136 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.392 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.392 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:23:28.393 00:23:28.393 --- 10.0.0.2 ping statistics --- 00:23:28.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.393 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:23:28.393 00:23:28.393 --- 10.0.0.1 ping statistics --- 00:23:28.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.393 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=282240 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 282240 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 282240 ']' 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.393 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.393 [2024-12-07 00:50:44.397457] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
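The trace up to this point is the standard nvmf TCP bring-up for this suite: one port of the detected e810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-verified, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with --wait-for-rpc. Collected into a stand-alone sketch; the interface names, addresses and paths are the ones this run detected, the variable names are shorthands introduced here, and the polling loop is only a stand-in for the harness's waitforlisten helper:

    # Assumes root, the two e810 ports found above, and an SPDK build at $SPDK.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TARGET_IF=cvl_0_0          # moved into its own namespace, owned by the target
    INITIATOR_IF=cvl_0_1       # stays in the default namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

    modprobe nvme-tcp
    # --wait-for-rpc holds the app before framework init so the ssl socket
    # implementation can be configured first; poll the RPC socket until it answers.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done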
00:23:28.393 [2024-12-07 00:50:44.397552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.393 [2024-12-07 00:50:44.474850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.393 [2024-12-07 00:50:44.523205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.393 [2024-12-07 00:50:44.523267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.393 [2024-12-07 00:50:44.523294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.393 [2024-12-07 00:50:44.523305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.393 [2024-12-07 00:50:44.523314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.393 [2024-12-07 00:50:44.523896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:28.650 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:28.907 true 00:23:28.907 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:28.907 00:50:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:29.163 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:29.163 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:29.163 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:29.419 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:29.419 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:29.675 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:29.675 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:29.675 00:50:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:29.951 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:29.951 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:30.208 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:30.208 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:30.208 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:30.208 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:30.464 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:30.464 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:30.464 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:31.027 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:31.027 00:50:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:31.283 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:31.283 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:31.283 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:31.540 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:31.540 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QCi7ZY7E11 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8GyiEepOOA 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QCi7ZY7E11 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8GyiEepOOA 00:23:31.798 00:50:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:32.056 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:32.622 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QCi7ZY7E11 00:23:32.622 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QCi7ZY7E11 00:23:32.622 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.880 [2024-12-07 00:50:48.787658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.880 00:50:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.138 00:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:33.397 [2024-12-07 00:50:49.341148] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.397 [2024-12-07 00:50:49.341372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.397 00:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.655 malloc0 00:23:33.655 00:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.913 00:50:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QCi7ZY7E11 00:23:34.172 00:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.431 00:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QCi7ZY7E11 00:23:46.628 Initializing NVMe Controllers 00:23:46.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.628 Initialization complete. Launching workers. 00:23:46.628 ======================================================== 00:23:46.628 Latency(us) 00:23:46.628 Device Information : IOPS MiB/s Average min max 00:23:46.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8638.37 33.74 7410.87 1177.07 8526.88 00:23:46.628 ======================================================== 00:23:46.628 Total : 8638.37 33.74 7410.87 1177.07 8526.88 00:23:46.628 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QCi7ZY7E11 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QCi7ZY7E11 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=284186 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 284186 /var/tmp/bdevperf.sock 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 284186 ']' 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
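By this point the target side is fully configured for TLS: the ssl socket implementation is made the default and set to version 13 (TLS 1.3), the version-7 and ktls probes above exercise the option get/set round-trip, two PSKs in the interchange format visible above (NVMeTLSkey-1:01:<base64 data>:) are written to 0600 temp files, and the subsystem, TLS listener, namespace and host entry are created over RPC before a first end-to-end check with spdk_nvme_perf -S ssl --psk-path. The RPC sequence, collected for readability ($SPDK as in the sketch above; $RPC and $KEY are shorthands, the key path and NQNs are the ones generated in this run):

    RPC="$SPDK/scripts/rpc.py"
    KEY=/tmp/tmp.QCi7ZY7E11                     # holds NVMeTLSkey-1:01:MDAx...JEiQ:, mode 0600

    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init                   # finish the startup deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what requests the TLS-secured listener; the "TLS support is considered experimental" notices above come from that path.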
00:23:46.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.628 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.629 [2024-12-07 00:51:00.708272] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:23:46.629 [2024-12-07 00:51:00.708377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284186 ] 00:23:46.629 [2024-12-07 00:51:00.778915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.629 [2024-12-07 00:51:00.827689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.629 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.629 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.629 00:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QCi7ZY7E11 00:23:46.629 00:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.629 [2024-12-07 00:51:01.492523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.629 TLSTESTn1 00:23:46.629 00:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:46.629 Running I/O for 10 seconds... 
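The verification run whose per-second throughput follows was wired up by the calls just traced: a standalone bdevperf application is started with -z (wait for RPC) on its own socket, the same key file is registered in its keyring, a TLS-enabled controller is attached, and the I/O phase is kicked off with bdevperf.py perform_tests. Collected, with the same shorthands as the sketches above:

    BPERF_SOCK=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &
    # wait for $BPERF_SOCK to answer, as done for the target socket earlier, then:
    $RPC -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.QCi7ZY7E11
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$BPERF_SOCK" perform_tests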
00:23:47.562 3255.00 IOPS, 12.71 MiB/s [2024-12-06T23:51:05.086Z] 3342.50 IOPS, 13.06 MiB/s [2024-12-06T23:51:06.035Z] 3358.00 IOPS, 13.12 MiB/s [2024-12-06T23:51:06.969Z] 3357.75 IOPS, 13.12 MiB/s [2024-12-06T23:51:07.903Z] 3295.20 IOPS, 12.87 MiB/s [2024-12-06T23:51:08.835Z] 3318.83 IOPS, 12.96 MiB/s [2024-12-06T23:51:09.767Z] 3310.71 IOPS, 12.93 MiB/s [2024-12-06T23:51:11.146Z] 3325.00 IOPS, 12.99 MiB/s [2024-12-06T23:51:12.081Z] 3323.44 IOPS, 12.98 MiB/s [2024-12-06T23:51:12.081Z] 3330.90 IOPS, 13.01 MiB/s 00:23:55.930 Latency(us) 00:23:55.930 [2024-12-06T23:51:12.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:55.930 Verification LBA range: start 0x0 length 0x2000 00:23:55.930 TLSTESTn1 : 10.02 3335.61 13.03 0.00 0.00 38304.50 9272.13 36117.62 00:23:55.930 [2024-12-06T23:51:12.081Z] =================================================================================================================== 00:23:55.930 [2024-12-06T23:51:12.081Z] Total : 3335.61 13.03 0.00 0.00 38304.50 9272.13 36117.62 00:23:55.930 { 00:23:55.930 "results": [ 00:23:55.930 { 00:23:55.930 "job": "TLSTESTn1", 00:23:55.930 "core_mask": "0x4", 00:23:55.930 "workload": "verify", 00:23:55.930 "status": "finished", 00:23:55.930 "verify_range": { 00:23:55.930 "start": 0, 00:23:55.930 "length": 8192 00:23:55.930 }, 00:23:55.930 "queue_depth": 128, 00:23:55.930 "io_size": 4096, 00:23:55.930 "runtime": 10.023653, 00:23:55.930 "iops": 3335.6102810023453, 00:23:55.930 "mibps": 13.029727660165412, 00:23:55.930 "io_failed": 0, 00:23:55.930 "io_timeout": 0, 00:23:55.930 "avg_latency_us": 38304.498151393804, 00:23:55.930 "min_latency_us": 9272.13037037037, 00:23:55.930 "max_latency_us": 36117.61777777778 00:23:55.930 } 00:23:55.930 ], 00:23:55.930 "core_count": 1 00:23:55.930 } 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 284186 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 284186 ']' 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 284186 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284186 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:55.930 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284186' 00:23:55.930 killing process with pid 284186 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 284186 00:23:55.931 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.931 00:23:55.931 Latency(us) 00:23:55.931 [2024-12-06T23:51:12.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.931 [2024-12-06T23:51:12.082Z] 
=================================================================================================================== 00:23:55.931 [2024-12-06T23:51:12.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 284186 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8GyiEepOOA 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8GyiEepOOA 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:55.931 00:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8GyiEepOOA 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8GyiEepOOA 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286078 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286078 /var/tmp/bdevperf.sock 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286078 ']' 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.931 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.931 [2024-12-07 00:51:12.049757] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:23:55.931 [2024-12-07 00:51:12.049838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286078 ] 00:23:56.190 [2024-12-07 00:51:12.117328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.190 [2024-12-07 00:51:12.162423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.190 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.190 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.190 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8GyiEepOOA 00:23:56.448 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.707 [2024-12-07 00:51:12.805439] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.707 [2024-12-07 00:51:12.811191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:56.707 [2024-12-07 00:51:12.811647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142a610 (107): Transport endpoint is not connected 00:23:56.707 [2024-12-07 00:51:12.812636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142a610 (9): Bad file descriptor 00:23:56.707 [2024-12-07 00:51:12.813636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:56.707 [2024-12-07 00:51:12.813656] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:56.707 [2024-12-07 00:51:12.813683] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:56.707 [2024-12-07 00:51:12.813702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:56.707 request: 00:23:56.707 { 00:23:56.707 "name": "TLSTEST", 00:23:56.707 "trtype": "tcp", 00:23:56.707 "traddr": "10.0.0.2", 00:23:56.707 "adrfam": "ipv4", 00:23:56.707 "trsvcid": "4420", 00:23:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.707 "prchk_reftag": false, 00:23:56.707 "prchk_guard": false, 00:23:56.707 "hdgst": false, 00:23:56.707 "ddgst": false, 00:23:56.707 "psk": "key0", 00:23:56.707 "allow_unrecognized_csi": false, 00:23:56.707 "method": "bdev_nvme_attach_controller", 00:23:56.707 "req_id": 1 00:23:56.707 } 00:23:56.707 Got JSON-RPC error response 00:23:56.707 response: 00:23:56.707 { 00:23:56.707 "code": -5, 00:23:56.707 "message": "Input/output error" 00:23:56.707 } 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 286078 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286078 ']' 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286078 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.707 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286078 00:23:56.966 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:56.966 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:56.966 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286078' 00:23:56.966 killing process with pid 286078 00:23:56.966 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286078 00:23:56.966 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.966 00:23:56.966 Latency(us) 00:23:56.966 [2024-12-06T23:51:13.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.966 [2024-12-06T23:51:13.117Z] =================================================================================================================== 00:23:56.966 [2024-12-06T23:51:13.117Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.966 00:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286078 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QCi7ZY7E11 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.QCi7ZY7E11 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QCi7ZY7E11 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QCi7ZY7E11 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286216 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286216 /var/tmp/bdevperf.sock 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286216 ']' 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.966 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.225 [2024-12-07 00:51:13.123794] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
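The bdevperf instance starting here (pid 286216) attaches as nqn.2016-06.io.spdk:host2 while presenting the valid key. The target looks PSKs up by the identity string shown in the errors below ("NVMe0R01 <hostnqn> <subnqn>"), and only host1 was registered with nvmf_subsystem_add_host above, so the lookup fails and the handshake is rejected, which is exactly what this negative case expects. For the attach to be allowed, the target would additionally need something like the following, deliberately not done in this test ($RPC as in the earlier sketch):

    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0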
00:23:57.225 [2024-12-07 00:51:13.123878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286216 ] 00:23:57.225 [2024-12-07 00:51:13.192591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.225 [2024-12-07 00:51:13.240206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.225 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.225 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.225 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QCi7ZY7E11 00:23:57.483 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:58.051 [2024-12-07 00:51:13.894603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.051 [2024-12-07 00:51:13.901601] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:58.051 [2024-12-07 00:51:13.901631] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:58.051 [2024-12-07 00:51:13.901683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:58.051 [2024-12-07 00:51:13.902000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9610 (107): Transport endpoint is not connected 00:23:58.051 [2024-12-07 00:51:13.903001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9610 (9): Bad file descriptor 00:23:58.051 [2024-12-07 00:51:13.904002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:58.051 [2024-12-07 00:51:13.904028] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:58.051 [2024-12-07 00:51:13.904069] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:58.051 [2024-12-07 00:51:13.904088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:58.051 request: 00:23:58.051 { 00:23:58.051 "name": "TLSTEST", 00:23:58.051 "trtype": "tcp", 00:23:58.051 "traddr": "10.0.0.2", 00:23:58.051 "adrfam": "ipv4", 00:23:58.051 "trsvcid": "4420", 00:23:58.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.051 "prchk_reftag": false, 00:23:58.051 "prchk_guard": false, 00:23:58.051 "hdgst": false, 00:23:58.051 "ddgst": false, 00:23:58.051 "psk": "key0", 00:23:58.051 "allow_unrecognized_csi": false, 00:23:58.051 "method": "bdev_nvme_attach_controller", 00:23:58.051 "req_id": 1 00:23:58.051 } 00:23:58.051 Got JSON-RPC error response 00:23:58.051 response: 00:23:58.051 { 00:23:58.051 "code": -5, 00:23:58.051 "message": "Input/output error" 00:23:58.051 } 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 286216 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286216 ']' 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286216 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286216 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286216' 00:23:58.051 killing process with pid 286216 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286216 00:23:58.051 Received shutdown signal, test time was about 10.000000 seconds 00:23:58.051 00:23:58.051 Latency(us) 00:23:58.051 [2024-12-06T23:51:14.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.051 [2024-12-06T23:51:14.202Z] =================================================================================================================== 00:23:58.051 [2024-12-06T23:51:14.202Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:58.051 00:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286216 00:23:58.051 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:58.051 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QCi7ZY7E11 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.QCi7ZY7E11 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QCi7ZY7E11 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QCi7ZY7E11 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286357 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286357 /var/tmp/bdevperf.sock 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286357 ']' 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.052 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.311 [2024-12-07 00:51:14.212289] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:23:58.311 [2024-12-07 00:51:14.212395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286357 ] 00:23:58.311 [2024-12-07 00:51:14.279115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.311 [2024-12-07 00:51:14.322640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.311 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.311 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:58.311 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QCi7ZY7E11 00:23:58.877 00:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.877 [2024-12-07 00:51:14.999494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.877 [2024-12-07 00:51:15.005093] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:58.877 [2024-12-07 00:51:15.005124] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:58.878 [2024-12-07 00:51:15.005163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:58.878 [2024-12-07 00:51:15.005702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1179610 (107): Transport endpoint is not connected 00:23:58.878 [2024-12-07 00:51:15.006691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1179610 (9): Bad file descriptor 00:23:58.878 [2024-12-07 00:51:15.007690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:58.878 [2024-12-07 00:51:15.007709] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:58.878 [2024-12-07 00:51:15.007737] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:58.878 [2024-12-07 00:51:15.007761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:58.878 request: 00:23:58.878 { 00:23:58.878 "name": "TLSTEST", 00:23:58.878 "trtype": "tcp", 00:23:58.878 "traddr": "10.0.0.2", 00:23:58.878 "adrfam": "ipv4", 00:23:58.878 "trsvcid": "4420", 00:23:58.878 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.878 "prchk_reftag": false, 00:23:58.878 "prchk_guard": false, 00:23:58.878 "hdgst": false, 00:23:58.878 "ddgst": false, 00:23:58.878 "psk": "key0", 00:23:58.878 "allow_unrecognized_csi": false, 00:23:58.878 "method": "bdev_nvme_attach_controller", 00:23:58.878 "req_id": 1 00:23:58.878 } 00:23:58.878 Got JSON-RPC error response 00:23:58.878 response: 00:23:58.878 { 00:23:58.878 "code": -5, 00:23:58.878 "message": "Input/output error" 00:23:58.878 } 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 286357 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286357 ']' 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286357 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286357 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286357' 00:23:59.140 killing process with pid 286357 00:23:59.140 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286357 00:23:59.140 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.140 00:23:59.140 Latency(us) 00:23:59.140 [2024-12-06T23:51:15.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.140 [2024-12-06T23:51:15.292Z] =================================================================================================================== 00:23:59.141 [2024-12-06T23:51:15.292Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286357 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:59.141 00:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286505 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286505 /var/tmp/bdevperf.sock 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286505 ']' 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.141 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.141 [2024-12-07 00:51:15.276903] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:23:59.141 [2024-12-07 00:51:15.276982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286505 ] 00:23:59.398 [2024-12-07 00:51:15.344799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.398 [2024-12-07 00:51:15.392079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.398 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.398 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.398 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:59.656 [2024-12-07 00:51:15.777459] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:59.656 [2024-12-07 00:51:15.777512] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:59.656 request: 00:23:59.656 { 00:23:59.656 "name": "key0", 00:23:59.656 "path": "", 00:23:59.656 "method": "keyring_file_add_key", 00:23:59.656 "req_id": 1 00:23:59.656 } 00:23:59.656 Got JSON-RPC error response 00:23:59.656 response: 00:23:59.656 { 00:23:59.656 "code": -1, 00:23:59.656 "message": "Operation not permitted" 00:23:59.656 } 00:23:59.656 00:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.912 [2024-12-07 00:51:16.050259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.913 [2024-12-07 00:51:16.050332] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:59.913 request: 00:23:59.913 { 00:23:59.913 "name": "TLSTEST", 00:23:59.913 "trtype": "tcp", 00:23:59.913 "traddr": "10.0.0.2", 00:23:59.913 "adrfam": "ipv4", 00:23:59.913 "trsvcid": "4420", 00:23:59.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.913 "prchk_reftag": false, 00:23:59.913 "prchk_guard": false, 00:23:59.913 "hdgst": false, 00:23:59.913 "ddgst": false, 00:23:59.913 "psk": "key0", 00:23:59.913 "allow_unrecognized_csi": false, 00:23:59.913 "method": "bdev_nvme_attach_controller", 00:23:59.913 "req_id": 1 00:23:59.913 } 00:23:59.913 Got JSON-RPC error response 00:23:59.913 response: 00:23:59.913 { 00:23:59.913 "code": -126, 00:23:59.913 "message": "Required key not available" 00:23:59.913 } 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 286505 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286505 ']' 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286505 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286505 
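The two errors above form the next negative case: keyring_file_check_path rejects the empty string because only absolute paths are accepted, so key0 never exists in the bdevperf keyring, and the subsequent bdev_nvme_attach_controller --psk key0 fails with -126 (Required key not available) before any TLS traffic is attempted. A sketch of the call being exercised (same bdevperf RPC socket as above, rpc.py path shortened):

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
  # -> rejected ("Non-absolute paths are not allowed"), so no key0 is created and the
  #    follow-up attach that references --psk key0 is refused with "Required key not available".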
00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286505' 00:24:00.170 killing process with pid 286505 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286505 00:24:00.170 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.170 00:24:00.170 Latency(us) 00:24:00.170 [2024-12-06T23:51:16.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.170 [2024-12-06T23:51:16.321Z] =================================================================================================================== 00:24:00.170 [2024-12-06T23:51:16.321Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286505 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 282240 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 282240 ']' 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 282240 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.170 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282240 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282240' 00:24:00.429 killing process with pid 282240 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 282240 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 282240 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:00.429 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.q1AZ7btZQL 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.q1AZ7btZQL 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=286652 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 286652 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286652 ']' 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.688 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.688 [2024-12-07 00:51:16.663461] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:00.688 [2024-12-07 00:51:16.663534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.688 [2024-12-07 00:51:16.735818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.688 [2024-12-07 00:51:16.781091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.688 [2024-12-07 00:51:16.781146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:00.688 [2024-12-07 00:51:16.781160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.688 [2024-12-07 00:51:16.781172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.688 [2024-12-07 00:51:16.781182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.688 [2024-12-07 00:51:16.781812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q1AZ7btZQL 00:24:00.945 00:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:01.203 [2024-12-07 00:51:17.209964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.203 00:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:01.461 00:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:01.719 [2024-12-07 00:51:17.803523] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.719 [2024-12-07 00:51:17.803758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.719 00:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.284 malloc0 00:24:02.284 00:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:02.542 00:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:02.801 00:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q1AZ7btZQL 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q1AZ7btZQL 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286989 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286989 /var/tmp/bdevperf.sock 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 286989 ']' 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.059 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.059 [2024-12-07 00:51:19.055393] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
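For reference, the NVMeTLSkey-1:02:...: string assembled a few steps back (target/tls.sh@160, via nvmf/common.sh's format_key helper and its embedded 'python -' snippet) is the NVMe/TCP PSK interchange form of the configured key: the literal key string is base64-encoded together with a 4-byte CRC32 of itself, and the middle field mirrors the digest argument (2). A sketch of that derivation in the same shell-plus-python style the script uses; the little-endian byte order of the appended CRC32 is an assumption here, not something shown in the log:

  python3 -c 'import base64, zlib; key = b"00112233445566778899aabbccddeeff0011223344556677"; crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")'
  # The result is written to the mktemp file (/tmp/tmp.q1AZ7btZQL) and chmod'ed 0600,
  # which is what the later permission-related cases in this run depend on.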
00:24:03.059 [2024-12-07 00:51:19.055493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286989 ] 00:24:03.059 [2024-12-07 00:51:19.125965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.059 [2024-12-07 00:51:19.175905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.317 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.317 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.317 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:03.575 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.833 [2024-12-07 00:51:19.810241] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.833 TLSTESTn1 00:24:03.833 00:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.091 Running I/O for 10 seconds... 00:24:05.956 3198.00 IOPS, 12.49 MiB/s [2024-12-06T23:51:23.040Z] 3223.50 IOPS, 12.59 MiB/s [2024-12-06T23:51:24.415Z] 3309.00 IOPS, 12.93 MiB/s [2024-12-06T23:51:25.347Z] 3375.75 IOPS, 13.19 MiB/s [2024-12-06T23:51:26.280Z] 3354.00 IOPS, 13.10 MiB/s [2024-12-06T23:51:27.212Z] 3367.50 IOPS, 13.15 MiB/s [2024-12-06T23:51:28.144Z] 3390.29 IOPS, 13.24 MiB/s [2024-12-06T23:51:29.092Z] 3409.12 IOPS, 13.32 MiB/s [2024-12-06T23:51:30.024Z] 3391.22 IOPS, 13.25 MiB/s [2024-12-06T23:51:30.281Z] 3382.80 IOPS, 13.21 MiB/s 00:24:14.130 Latency(us) 00:24:14.130 [2024-12-06T23:51:30.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.130 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:14.130 Verification LBA range: start 0x0 length 0x2000 00:24:14.130 TLSTESTn1 : 10.03 3386.41 13.23 0.00 0.00 37725.16 6213.78 33787.45 00:24:14.130 [2024-12-06T23:51:30.281Z] =================================================================================================================== 00:24:14.130 [2024-12-06T23:51:30.281Z] Total : 3386.41 13.23 0.00 0.00 37725.16 6213.78 33787.45 00:24:14.130 { 00:24:14.130 "results": [ 00:24:14.130 { 00:24:14.130 "job": "TLSTESTn1", 00:24:14.130 "core_mask": "0x4", 00:24:14.130 "workload": "verify", 00:24:14.130 "status": "finished", 00:24:14.130 "verify_range": { 00:24:14.130 "start": 0, 00:24:14.130 "length": 8192 00:24:14.130 }, 00:24:14.130 "queue_depth": 128, 00:24:14.130 "io_size": 4096, 00:24:14.130 "runtime": 10.026555, 00:24:14.130 "iops": 3386.4073951621467, 00:24:14.130 "mibps": 13.228153887352136, 00:24:14.130 "io_failed": 0, 00:24:14.130 "io_timeout": 0, 00:24:14.130 "avg_latency_us": 37725.16411999677, 00:24:14.130 "min_latency_us": 6213.783703703703, 00:24:14.130 "max_latency_us": 33787.44888888889 00:24:14.130 } 00:24:14.130 ], 00:24:14.130 
"core_count": 1 00:24:14.130 } 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 286989 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286989 ']' 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286989 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286989 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286989' 00:24:14.130 killing process with pid 286989 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286989 00:24:14.130 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.130 00:24:14.130 Latency(us) 00:24:14.130 [2024-12-06T23:51:30.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.130 [2024-12-06T23:51:30.281Z] =================================================================================================================== 00:24:14.130 [2024-12-06T23:51:30.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.130 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286989 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.q1AZ7btZQL 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q1AZ7btZQL 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q1AZ7btZQL 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q1AZ7btZQL 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:14.388 
00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.q1AZ7btZQL 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=288272 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 288272 /var/tmp/bdevperf.sock 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 288272 ']' 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.388 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.388 [2024-12-07 00:51:30.365504] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:14.388 [2024-12-07 00:51:30.365589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288272 ] 00:24:14.388 [2024-12-07 00:51:30.434730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.388 [2024-12-07 00:51:30.481451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.645 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.645 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:14.645 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:14.902 [2024-12-07 00:51:30.859869] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.q1AZ7btZQL': 0100666 00:24:14.902 [2024-12-07 00:51:30.859916] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:14.902 request: 00:24:14.902 { 00:24:14.902 "name": "key0", 00:24:14.902 "path": "/tmp/tmp.q1AZ7btZQL", 00:24:14.902 "method": "keyring_file_add_key", 00:24:14.902 "req_id": 1 00:24:14.902 } 00:24:14.902 Got JSON-RPC error response 00:24:14.902 response: 00:24:14.902 { 00:24:14.902 "code": -1, 00:24:14.902 "message": "Operation not permitted" 00:24:14.902 } 00:24:14.902 00:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.159 [2024-12-07 00:51:31.136687] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.159 [2024-12-07 00:51:31.136745] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:15.159 request: 00:24:15.159 { 00:24:15.159 "name": "TLSTEST", 00:24:15.159 "trtype": "tcp", 00:24:15.159 "traddr": "10.0.0.2", 00:24:15.159 "adrfam": "ipv4", 00:24:15.159 "trsvcid": "4420", 00:24:15.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.159 "prchk_reftag": false, 00:24:15.159 "prchk_guard": false, 00:24:15.159 "hdgst": false, 00:24:15.159 "ddgst": false, 00:24:15.159 "psk": "key0", 00:24:15.159 "allow_unrecognized_csi": false, 00:24:15.159 "method": "bdev_nvme_attach_controller", 00:24:15.159 "req_id": 1 00:24:15.159 } 00:24:15.159 Got JSON-RPC error response 00:24:15.159 response: 00:24:15.159 { 00:24:15.159 "code": -126, 00:24:15.160 "message": "Required key not available" 00:24:15.160 } 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 288272 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 288272 ']' 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 288272 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288272 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288272' 00:24:15.160 killing process with pid 288272 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 288272 00:24:15.160 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.160 00:24:15.160 Latency(us) 00:24:15.160 [2024-12-06T23:51:31.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.160 [2024-12-06T23:51:31.311Z] =================================================================================================================== 00:24:15.160 [2024-12-06T23:51:31.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.160 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 288272 00:24:15.416 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:15.416 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:15.416 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.416 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.416 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 286652 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 286652 ']' 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 286652 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286652 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286652' 00:24:15.417 killing process with pid 286652 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 286652 00:24:15.417 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 286652 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=288467 
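The chmod 0666 a few entries back turns the previously working key file into another negative case: the keyring refuses key files whose mode grants access beyond the owner (keyring_file_check_path logs the observed mode, 0100666), so key0 is not added and the attach once again ends in -126. A sketch of the property being checked, using the same file this run uses:

  chmod 0666 /tmp/tmp.q1AZ7btZQL
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL
  # -> rejected: "Invalid permissions for key file '/tmp/tmp.q1AZ7btZQL': 0100666";
  #    the suite restores chmod 0600 later (target/tls.sh@182) before the final working test.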
00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 288467 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 288467 ']' 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.674 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.674 [2024-12-07 00:51:31.642069] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:15.674 [2024-12-07 00:51:31.642150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.674 [2024-12-07 00:51:31.716814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.674 [2024-12-07 00:51:31.764597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.674 [2024-12-07 00:51:31.764654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.674 [2024-12-07 00:51:31.764667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.674 [2024-12-07 00:51:31.764679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.674 [2024-12-07 00:51:31.764688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:15.674 [2024-12-07 00:51:31.765279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q1AZ7btZQL 00:24:15.931 00:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.188 [2024-12-07 00:51:32.166212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.188 00:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.445 00:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.702 [2024-12-07 00:51:32.695643] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.702 [2024-12-07 00:51:32.695896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.702 00:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.961 malloc0 00:24:16.961 00:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:17.219 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:17.476 [2024-12-07 
00:51:33.499648] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.q1AZ7btZQL': 0100666 00:24:17.476 [2024-12-07 00:51:33.499681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:17.476 request: 00:24:17.476 { 00:24:17.476 "name": "key0", 00:24:17.476 "path": "/tmp/tmp.q1AZ7btZQL", 00:24:17.476 "method": "keyring_file_add_key", 00:24:17.476 "req_id": 1 00:24:17.476 } 00:24:17.476 Got JSON-RPC error response 00:24:17.476 response: 00:24:17.476 { 00:24:17.476 "code": -1, 00:24:17.476 "message": "Operation not permitted" 00:24:17.476 } 00:24:17.476 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.733 [2024-12-07 00:51:33.768404] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:17.733 [2024-12-07 00:51:33.768455] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:17.733 request: 00:24:17.733 { 00:24:17.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.734 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.734 "psk": "key0", 00:24:17.734 "method": "nvmf_subsystem_add_host", 00:24:17.734 "req_id": 1 00:24:17.734 } 00:24:17.734 Got JSON-RPC error response 00:24:17.734 response: 00:24:17.734 { 00:24:17.734 "code": -32603, 00:24:17.734 "message": "Internal error" 00:24:17.734 } 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 288467 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 288467 ']' 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 288467 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288467 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288467' 00:24:17.734 killing process with pid 288467 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 288467 00:24:17.734 00:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 288467 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.q1AZ7btZQL 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=288833 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 288833 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 288833 ']' 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.992 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.992 [2024-12-07 00:51:34.093152] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:17.992 [2024-12-07 00:51:34.093266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.250 [2024-12-07 00:51:34.164861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.250 [2024-12-07 00:51:34.205310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.250 [2024-12-07 00:51:34.205372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.250 [2024-12-07 00:51:34.205395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.250 [2024-12-07 00:51:34.205413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.250 [2024-12-07 00:51:34.205422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
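The target-side variant of the same permission check has just completed: setup_nvmf_tgt on the fresh nvmf_tgt (pid 288467) builds the TCP transport, subsystem and TLS listener, but keyring_file_add_key rejects the still group/world-accessible key file, so nvmf_subsystem_add_host --psk key0 fails with "Key 'key0' does not exist" (JSON-RPC -32603). The key is then chmod'ed back to 0600 and a new target (pid 288833) is started for the final, working TLS run. A sketch of the failing target-side sequence (rpc.py path shortened, malloc0 namespace steps omitted):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL   # rejected while the file is 0666
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # -> "Key 'key0' does not exist" / -32603, since the key was never added to the target keyring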
00:24:18.250 [2024-12-07 00:51:34.205958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q1AZ7btZQL 00:24:18.250 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.508 [2024-12-07 00:51:34.593511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.508 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:18.766 00:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:19.025 [2024-12-07 00:51:35.138978] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.025 [2024-12-07 00:51:35.139245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.025 00:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:19.282 malloc0 00:24:19.540 00:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:19.796 00:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:20.053 00:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=289124 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 289124 /var/tmp/bdevperf.sock 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 289124 ']' 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.311 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.311 [2024-12-07 00:51:36.289051] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:20.311 [2024-12-07 00:51:36.289119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289124 ] 00:24:20.311 [2024-12-07 00:51:36.354647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.311 [2024-12-07 00:51:36.399251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.568 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.568 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.568 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:20.825 00:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.083 [2024-12-07 00:51:37.036724] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.083 TLSTESTn1 00:24:21.083 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:21.341 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:21.341 "subsystems": [ 00:24:21.341 { 00:24:21.341 "subsystem": "keyring", 00:24:21.341 "config": [ 00:24:21.341 { 00:24:21.341 "method": "keyring_file_add_key", 00:24:21.341 "params": { 00:24:21.341 "name": "key0", 00:24:21.341 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:21.341 } 00:24:21.341 } 00:24:21.341 ] 00:24:21.341 }, 00:24:21.341 { 00:24:21.341 "subsystem": "iobuf", 00:24:21.341 "config": [ 00:24:21.341 { 00:24:21.341 "method": "iobuf_set_options", 00:24:21.341 "params": { 00:24:21.341 "small_pool_count": 8192, 00:24:21.341 "large_pool_count": 1024, 00:24:21.341 "small_bufsize": 8192, 00:24:21.341 "large_bufsize": 135168, 00:24:21.341 "enable_numa": false 00:24:21.341 } 00:24:21.341 } 00:24:21.341 ] 00:24:21.341 }, 00:24:21.341 { 00:24:21.341 "subsystem": "sock", 00:24:21.341 "config": [ 00:24:21.341 { 00:24:21.341 "method": "sock_set_default_impl", 00:24:21.341 "params": { 00:24:21.341 "impl_name": "posix" 
00:24:21.341 } 00:24:21.341 }, 00:24:21.341 { 00:24:21.341 "method": "sock_impl_set_options", 00:24:21.341 "params": { 00:24:21.341 "impl_name": "ssl", 00:24:21.341 "recv_buf_size": 4096, 00:24:21.341 "send_buf_size": 4096, 00:24:21.341 "enable_recv_pipe": true, 00:24:21.341 "enable_quickack": false, 00:24:21.341 "enable_placement_id": 0, 00:24:21.341 "enable_zerocopy_send_server": true, 00:24:21.341 "enable_zerocopy_send_client": false, 00:24:21.341 "zerocopy_threshold": 0, 00:24:21.341 "tls_version": 0, 00:24:21.341 "enable_ktls": false 00:24:21.341 } 00:24:21.341 }, 00:24:21.341 { 00:24:21.341 "method": "sock_impl_set_options", 00:24:21.341 "params": { 00:24:21.341 "impl_name": "posix", 00:24:21.341 "recv_buf_size": 2097152, 00:24:21.341 "send_buf_size": 2097152, 00:24:21.341 "enable_recv_pipe": true, 00:24:21.341 "enable_quickack": false, 00:24:21.341 "enable_placement_id": 0, 00:24:21.341 "enable_zerocopy_send_server": true, 00:24:21.341 "enable_zerocopy_send_client": false, 00:24:21.342 "zerocopy_threshold": 0, 00:24:21.342 "tls_version": 0, 00:24:21.342 "enable_ktls": false 00:24:21.342 } 00:24:21.342 } 00:24:21.342 ] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "vmd", 00:24:21.342 "config": [] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "accel", 00:24:21.342 "config": [ 00:24:21.342 { 00:24:21.342 "method": "accel_set_options", 00:24:21.342 "params": { 00:24:21.342 "small_cache_size": 128, 00:24:21.342 "large_cache_size": 16, 00:24:21.342 "task_count": 2048, 00:24:21.342 "sequence_count": 2048, 00:24:21.342 "buf_count": 2048 00:24:21.342 } 00:24:21.342 } 00:24:21.342 ] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "bdev", 00:24:21.342 "config": [ 00:24:21.342 { 00:24:21.342 "method": "bdev_set_options", 00:24:21.342 "params": { 00:24:21.342 "bdev_io_pool_size": 65535, 00:24:21.342 "bdev_io_cache_size": 256, 00:24:21.342 "bdev_auto_examine": true, 00:24:21.342 "iobuf_small_cache_size": 128, 00:24:21.342 "iobuf_large_cache_size": 16 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_raid_set_options", 00:24:21.342 "params": { 00:24:21.342 "process_window_size_kb": 1024, 00:24:21.342 "process_max_bandwidth_mb_sec": 0 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_iscsi_set_options", 00:24:21.342 "params": { 00:24:21.342 "timeout_sec": 30 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_nvme_set_options", 00:24:21.342 "params": { 00:24:21.342 "action_on_timeout": "none", 00:24:21.342 "timeout_us": 0, 00:24:21.342 "timeout_admin_us": 0, 00:24:21.342 "keep_alive_timeout_ms": 10000, 00:24:21.342 "arbitration_burst": 0, 00:24:21.342 "low_priority_weight": 0, 00:24:21.342 "medium_priority_weight": 0, 00:24:21.342 "high_priority_weight": 0, 00:24:21.342 "nvme_adminq_poll_period_us": 10000, 00:24:21.342 "nvme_ioq_poll_period_us": 0, 00:24:21.342 "io_queue_requests": 0, 00:24:21.342 "delay_cmd_submit": true, 00:24:21.342 "transport_retry_count": 4, 00:24:21.342 "bdev_retry_count": 3, 00:24:21.342 "transport_ack_timeout": 0, 00:24:21.342 "ctrlr_loss_timeout_sec": 0, 00:24:21.342 "reconnect_delay_sec": 0, 00:24:21.342 "fast_io_fail_timeout_sec": 0, 00:24:21.342 "disable_auto_failback": false, 00:24:21.342 "generate_uuids": false, 00:24:21.342 "transport_tos": 0, 00:24:21.342 "nvme_error_stat": false, 00:24:21.342 "rdma_srq_size": 0, 00:24:21.342 "io_path_stat": false, 00:24:21.342 "allow_accel_sequence": false, 00:24:21.342 "rdma_max_cq_size": 0, 00:24:21.342 
"rdma_cm_event_timeout_ms": 0, 00:24:21.342 "dhchap_digests": [ 00:24:21.342 "sha256", 00:24:21.342 "sha384", 00:24:21.342 "sha512" 00:24:21.342 ], 00:24:21.342 "dhchap_dhgroups": [ 00:24:21.342 "null", 00:24:21.342 "ffdhe2048", 00:24:21.342 "ffdhe3072", 00:24:21.342 "ffdhe4096", 00:24:21.342 "ffdhe6144", 00:24:21.342 "ffdhe8192" 00:24:21.342 ] 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_nvme_set_hotplug", 00:24:21.342 "params": { 00:24:21.342 "period_us": 100000, 00:24:21.342 "enable": false 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_malloc_create", 00:24:21.342 "params": { 00:24:21.342 "name": "malloc0", 00:24:21.342 "num_blocks": 8192, 00:24:21.342 "block_size": 4096, 00:24:21.342 "physical_block_size": 4096, 00:24:21.342 "uuid": "2147439d-1211-4114-9e9f-dd1eebd1ae0a", 00:24:21.342 "optimal_io_boundary": 0, 00:24:21.342 "md_size": 0, 00:24:21.342 "dif_type": 0, 00:24:21.342 "dif_is_head_of_md": false, 00:24:21.342 "dif_pi_format": 0 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "bdev_wait_for_examine" 00:24:21.342 } 00:24:21.342 ] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "nbd", 00:24:21.342 "config": [] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "scheduler", 00:24:21.342 "config": [ 00:24:21.342 { 00:24:21.342 "method": "framework_set_scheduler", 00:24:21.342 "params": { 00:24:21.342 "name": "static" 00:24:21.342 } 00:24:21.342 } 00:24:21.342 ] 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "subsystem": "nvmf", 00:24:21.342 "config": [ 00:24:21.342 { 00:24:21.342 "method": "nvmf_set_config", 00:24:21.342 "params": { 00:24:21.342 "discovery_filter": "match_any", 00:24:21.342 "admin_cmd_passthru": { 00:24:21.342 "identify_ctrlr": false 00:24:21.342 }, 00:24:21.342 "dhchap_digests": [ 00:24:21.342 "sha256", 00:24:21.342 "sha384", 00:24:21.342 "sha512" 00:24:21.342 ], 00:24:21.342 "dhchap_dhgroups": [ 00:24:21.342 "null", 00:24:21.342 "ffdhe2048", 00:24:21.342 "ffdhe3072", 00:24:21.342 "ffdhe4096", 00:24:21.342 "ffdhe6144", 00:24:21.342 "ffdhe8192" 00:24:21.342 ] 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "nvmf_set_max_subsystems", 00:24:21.342 "params": { 00:24:21.342 "max_subsystems": 1024 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "nvmf_set_crdt", 00:24:21.342 "params": { 00:24:21.342 "crdt1": 0, 00:24:21.342 "crdt2": 0, 00:24:21.342 "crdt3": 0 00:24:21.342 } 00:24:21.342 }, 00:24:21.342 { 00:24:21.342 "method": "nvmf_create_transport", 00:24:21.342 "params": { 00:24:21.342 "trtype": "TCP", 00:24:21.342 "max_queue_depth": 128, 00:24:21.342 "max_io_qpairs_per_ctrlr": 127, 00:24:21.342 "in_capsule_data_size": 4096, 00:24:21.342 "max_io_size": 131072, 00:24:21.342 "io_unit_size": 131072, 00:24:21.342 "max_aq_depth": 128, 00:24:21.342 "num_shared_buffers": 511, 00:24:21.342 "buf_cache_size": 4294967295, 00:24:21.342 "dif_insert_or_strip": false, 00:24:21.342 "zcopy": false, 00:24:21.342 "c2h_success": false, 00:24:21.343 "sock_priority": 0, 00:24:21.343 "abort_timeout_sec": 1, 00:24:21.343 "ack_timeout": 0, 00:24:21.343 "data_wr_pool_size": 0 00:24:21.343 } 00:24:21.343 }, 00:24:21.343 { 00:24:21.343 "method": "nvmf_create_subsystem", 00:24:21.343 "params": { 00:24:21.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.343 "allow_any_host": false, 00:24:21.343 "serial_number": "SPDK00000000000001", 00:24:21.343 "model_number": "SPDK bdev Controller", 00:24:21.343 "max_namespaces": 10, 00:24:21.343 "min_cntlid": 1, 00:24:21.343 
"max_cntlid": 65519, 00:24:21.343 "ana_reporting": false 00:24:21.343 } 00:24:21.343 }, 00:24:21.343 { 00:24:21.343 "method": "nvmf_subsystem_add_host", 00:24:21.343 "params": { 00:24:21.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.343 "host": "nqn.2016-06.io.spdk:host1", 00:24:21.343 "psk": "key0" 00:24:21.343 } 00:24:21.343 }, 00:24:21.343 { 00:24:21.343 "method": "nvmf_subsystem_add_ns", 00:24:21.343 "params": { 00:24:21.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.343 "namespace": { 00:24:21.343 "nsid": 1, 00:24:21.343 "bdev_name": "malloc0", 00:24:21.343 "nguid": "2147439D121141149E9FDD1EEBD1AE0A", 00:24:21.343 "uuid": "2147439d-1211-4114-9e9f-dd1eebd1ae0a", 00:24:21.343 "no_auto_visible": false 00:24:21.343 } 00:24:21.343 } 00:24:21.343 }, 00:24:21.343 { 00:24:21.343 "method": "nvmf_subsystem_add_listener", 00:24:21.343 "params": { 00:24:21.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.343 "listen_address": { 00:24:21.343 "trtype": "TCP", 00:24:21.343 "adrfam": "IPv4", 00:24:21.343 "traddr": "10.0.0.2", 00:24:21.343 "trsvcid": "4420" 00:24:21.343 }, 00:24:21.343 "secure_channel": true 00:24:21.343 } 00:24:21.343 } 00:24:21.343 ] 00:24:21.343 } 00:24:21.343 ] 00:24:21.343 }' 00:24:21.343 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:21.909 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:21.909 "subsystems": [ 00:24:21.909 { 00:24:21.909 "subsystem": "keyring", 00:24:21.909 "config": [ 00:24:21.909 { 00:24:21.909 "method": "keyring_file_add_key", 00:24:21.909 "params": { 00:24:21.909 "name": "key0", 00:24:21.909 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:21.909 } 00:24:21.909 } 00:24:21.909 ] 00:24:21.909 }, 00:24:21.909 { 00:24:21.909 "subsystem": "iobuf", 00:24:21.909 "config": [ 00:24:21.909 { 00:24:21.909 "method": "iobuf_set_options", 00:24:21.909 "params": { 00:24:21.909 "small_pool_count": 8192, 00:24:21.909 "large_pool_count": 1024, 00:24:21.909 "small_bufsize": 8192, 00:24:21.909 "large_bufsize": 135168, 00:24:21.909 "enable_numa": false 00:24:21.909 } 00:24:21.909 } 00:24:21.909 ] 00:24:21.909 }, 00:24:21.909 { 00:24:21.909 "subsystem": "sock", 00:24:21.909 "config": [ 00:24:21.909 { 00:24:21.909 "method": "sock_set_default_impl", 00:24:21.909 "params": { 00:24:21.909 "impl_name": "posix" 00:24:21.909 } 00:24:21.909 }, 00:24:21.909 { 00:24:21.909 "method": "sock_impl_set_options", 00:24:21.909 "params": { 00:24:21.909 "impl_name": "ssl", 00:24:21.909 "recv_buf_size": 4096, 00:24:21.909 "send_buf_size": 4096, 00:24:21.909 "enable_recv_pipe": true, 00:24:21.909 "enable_quickack": false, 00:24:21.909 "enable_placement_id": 0, 00:24:21.909 "enable_zerocopy_send_server": true, 00:24:21.909 "enable_zerocopy_send_client": false, 00:24:21.909 "zerocopy_threshold": 0, 00:24:21.910 "tls_version": 0, 00:24:21.910 "enable_ktls": false 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "sock_impl_set_options", 00:24:21.910 "params": { 00:24:21.910 "impl_name": "posix", 00:24:21.910 "recv_buf_size": 2097152, 00:24:21.910 "send_buf_size": 2097152, 00:24:21.910 "enable_recv_pipe": true, 00:24:21.910 "enable_quickack": false, 00:24:21.910 "enable_placement_id": 0, 00:24:21.910 "enable_zerocopy_send_server": true, 00:24:21.910 "enable_zerocopy_send_client": false, 00:24:21.910 "zerocopy_threshold": 0, 00:24:21.910 "tls_version": 0, 00:24:21.910 "enable_ktls": false 00:24:21.910 } 00:24:21.910 
} 00:24:21.910 ] 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "subsystem": "vmd", 00:24:21.910 "config": [] 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "subsystem": "accel", 00:24:21.910 "config": [ 00:24:21.910 { 00:24:21.910 "method": "accel_set_options", 00:24:21.910 "params": { 00:24:21.910 "small_cache_size": 128, 00:24:21.910 "large_cache_size": 16, 00:24:21.910 "task_count": 2048, 00:24:21.910 "sequence_count": 2048, 00:24:21.910 "buf_count": 2048 00:24:21.910 } 00:24:21.910 } 00:24:21.910 ] 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "subsystem": "bdev", 00:24:21.910 "config": [ 00:24:21.910 { 00:24:21.910 "method": "bdev_set_options", 00:24:21.910 "params": { 00:24:21.910 "bdev_io_pool_size": 65535, 00:24:21.910 "bdev_io_cache_size": 256, 00:24:21.910 "bdev_auto_examine": true, 00:24:21.910 "iobuf_small_cache_size": 128, 00:24:21.910 "iobuf_large_cache_size": 16 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "bdev_raid_set_options", 00:24:21.910 "params": { 00:24:21.910 "process_window_size_kb": 1024, 00:24:21.910 "process_max_bandwidth_mb_sec": 0 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "bdev_iscsi_set_options", 00:24:21.910 "params": { 00:24:21.910 "timeout_sec": 30 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "bdev_nvme_set_options", 00:24:21.910 "params": { 00:24:21.910 "action_on_timeout": "none", 00:24:21.910 "timeout_us": 0, 00:24:21.910 "timeout_admin_us": 0, 00:24:21.910 "keep_alive_timeout_ms": 10000, 00:24:21.910 "arbitration_burst": 0, 00:24:21.910 "low_priority_weight": 0, 00:24:21.910 "medium_priority_weight": 0, 00:24:21.910 "high_priority_weight": 0, 00:24:21.910 "nvme_adminq_poll_period_us": 10000, 00:24:21.910 "nvme_ioq_poll_period_us": 0, 00:24:21.910 "io_queue_requests": 512, 00:24:21.910 "delay_cmd_submit": true, 00:24:21.910 "transport_retry_count": 4, 00:24:21.910 "bdev_retry_count": 3, 00:24:21.910 "transport_ack_timeout": 0, 00:24:21.910 "ctrlr_loss_timeout_sec": 0, 00:24:21.910 "reconnect_delay_sec": 0, 00:24:21.910 "fast_io_fail_timeout_sec": 0, 00:24:21.910 "disable_auto_failback": false, 00:24:21.910 "generate_uuids": false, 00:24:21.910 "transport_tos": 0, 00:24:21.910 "nvme_error_stat": false, 00:24:21.910 "rdma_srq_size": 0, 00:24:21.910 "io_path_stat": false, 00:24:21.910 "allow_accel_sequence": false, 00:24:21.910 "rdma_max_cq_size": 0, 00:24:21.910 "rdma_cm_event_timeout_ms": 0, 00:24:21.910 "dhchap_digests": [ 00:24:21.910 "sha256", 00:24:21.910 "sha384", 00:24:21.910 "sha512" 00:24:21.910 ], 00:24:21.910 "dhchap_dhgroups": [ 00:24:21.910 "null", 00:24:21.910 "ffdhe2048", 00:24:21.910 "ffdhe3072", 00:24:21.910 "ffdhe4096", 00:24:21.910 "ffdhe6144", 00:24:21.910 "ffdhe8192" 00:24:21.910 ] 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "bdev_nvme_attach_controller", 00:24:21.910 "params": { 00:24:21.910 "name": "TLSTEST", 00:24:21.910 "trtype": "TCP", 00:24:21.910 "adrfam": "IPv4", 00:24:21.910 "traddr": "10.0.0.2", 00:24:21.910 "trsvcid": "4420", 00:24:21.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.910 "prchk_reftag": false, 00:24:21.910 "prchk_guard": false, 00:24:21.910 "ctrlr_loss_timeout_sec": 0, 00:24:21.910 "reconnect_delay_sec": 0, 00:24:21.910 "fast_io_fail_timeout_sec": 0, 00:24:21.910 "psk": "key0", 00:24:21.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.910 "hdgst": false, 00:24:21.910 "ddgst": false, 00:24:21.910 "multipath": "multipath" 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": 
"bdev_nvme_set_hotplug", 00:24:21.910 "params": { 00:24:21.910 "period_us": 100000, 00:24:21.910 "enable": false 00:24:21.910 } 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "method": "bdev_wait_for_examine" 00:24:21.910 } 00:24:21.910 ] 00:24:21.910 }, 00:24:21.910 { 00:24:21.910 "subsystem": "nbd", 00:24:21.910 "config": [] 00:24:21.910 } 00:24:21.910 ] 00:24:21.910 }' 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 289124 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 289124 ']' 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 289124 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289124 00:24:21.910 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:21.911 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:21.911 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289124' 00:24:21.911 killing process with pid 289124 00:24:21.911 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 289124 00:24:21.911 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.911 00:24:21.911 Latency(us) 00:24:21.911 [2024-12-06T23:51:38.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.911 [2024-12-06T23:51:38.062Z] =================================================================================================================== 00:24:21.911 [2024-12-06T23:51:38.062Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.911 00:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 289124 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 288833 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 288833 ']' 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 288833 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.911 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288833 00:24:22.169 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288833' 00:24:22.170 killing process with pid 288833 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 288833 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 288833 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.170 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:22.170 "subsystems": [ 00:24:22.170 { 00:24:22.170 "subsystem": "keyring", 00:24:22.170 "config": [ 00:24:22.170 { 00:24:22.170 "method": "keyring_file_add_key", 00:24:22.170 "params": { 00:24:22.170 "name": "key0", 00:24:22.170 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:22.170 } 00:24:22.170 } 00:24:22.170 ] 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "subsystem": "iobuf", 00:24:22.170 "config": [ 00:24:22.170 { 00:24:22.170 "method": "iobuf_set_options", 00:24:22.170 "params": { 00:24:22.170 "small_pool_count": 8192, 00:24:22.170 "large_pool_count": 1024, 00:24:22.170 "small_bufsize": 8192, 00:24:22.170 "large_bufsize": 135168, 00:24:22.170 "enable_numa": false 00:24:22.170 } 00:24:22.170 } 00:24:22.170 ] 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "subsystem": "sock", 00:24:22.170 "config": [ 00:24:22.170 { 00:24:22.170 "method": "sock_set_default_impl", 00:24:22.170 "params": { 00:24:22.170 "impl_name": "posix" 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "method": "sock_impl_set_options", 00:24:22.170 "params": { 00:24:22.170 "impl_name": "ssl", 00:24:22.170 "recv_buf_size": 4096, 00:24:22.170 "send_buf_size": 4096, 00:24:22.170 "enable_recv_pipe": true, 00:24:22.170 "enable_quickack": false, 00:24:22.170 "enable_placement_id": 0, 00:24:22.170 "enable_zerocopy_send_server": true, 00:24:22.170 "enable_zerocopy_send_client": false, 00:24:22.170 "zerocopy_threshold": 0, 00:24:22.170 "tls_version": 0, 00:24:22.170 "enable_ktls": false 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "method": "sock_impl_set_options", 00:24:22.170 "params": { 00:24:22.170 "impl_name": "posix", 00:24:22.170 "recv_buf_size": 2097152, 00:24:22.170 "send_buf_size": 2097152, 00:24:22.170 "enable_recv_pipe": true, 00:24:22.170 "enable_quickack": false, 00:24:22.170 "enable_placement_id": 0, 00:24:22.170 "enable_zerocopy_send_server": true, 00:24:22.170 "enable_zerocopy_send_client": false, 00:24:22.170 "zerocopy_threshold": 0, 00:24:22.170 "tls_version": 0, 00:24:22.170 "enable_ktls": false 00:24:22.170 } 00:24:22.170 } 00:24:22.170 ] 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "subsystem": "vmd", 00:24:22.170 "config": [] 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "subsystem": "accel", 00:24:22.170 "config": [ 00:24:22.170 { 00:24:22.170 "method": "accel_set_options", 00:24:22.170 "params": { 00:24:22.170 "small_cache_size": 128, 00:24:22.170 "large_cache_size": 16, 00:24:22.170 "task_count": 2048, 00:24:22.170 "sequence_count": 2048, 00:24:22.170 "buf_count": 2048 00:24:22.170 } 00:24:22.170 } 00:24:22.170 ] 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "subsystem": "bdev", 00:24:22.170 "config": [ 00:24:22.170 { 00:24:22.170 "method": "bdev_set_options", 00:24:22.170 "params": { 00:24:22.170 "bdev_io_pool_size": 65535, 00:24:22.170 "bdev_io_cache_size": 256, 00:24:22.170 "bdev_auto_examine": true, 00:24:22.170 "iobuf_small_cache_size": 128, 00:24:22.170 "iobuf_large_cache_size": 16 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "method": "bdev_raid_set_options", 00:24:22.170 "params": { 00:24:22.170 "process_window_size_kb": 1024, 00:24:22.170 "process_max_bandwidth_mb_sec": 0 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "method": "bdev_iscsi_set_options", 00:24:22.170 "params": { 00:24:22.170 
"timeout_sec": 30 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.170 "method": "bdev_nvme_set_options", 00:24:22.170 "params": { 00:24:22.170 "action_on_timeout": "none", 00:24:22.170 "timeout_us": 0, 00:24:22.170 "timeout_admin_us": 0, 00:24:22.170 "keep_alive_timeout_ms": 10000, 00:24:22.170 "arbitration_burst": 0, 00:24:22.170 "low_priority_weight": 0, 00:24:22.170 "medium_priority_weight": 0, 00:24:22.170 "high_priority_weight": 0, 00:24:22.170 "nvme_adminq_poll_period_us": 10000, 00:24:22.170 "nvme_ioq_poll_period_us": 0, 00:24:22.170 "io_queue_requests": 0, 00:24:22.170 "delay_cmd_submit": true, 00:24:22.170 "transport_retry_count": 4, 00:24:22.170 "bdev_retry_count": 3, 00:24:22.170 "transport_ack_timeout": 0, 00:24:22.170 "ctrlr_loss_timeout_sec": 0, 00:24:22.170 "reconnect_delay_sec": 0, 00:24:22.170 "fast_io_fail_timeout_sec": 0, 00:24:22.170 "disable_auto_failback": false, 00:24:22.170 "generate_uuids": false, 00:24:22.170 "transport_tos": 0, 00:24:22.170 "nvme_error_stat": false, 00:24:22.170 "rdma_srq_size": 0, 00:24:22.170 "io_path_stat": false, 00:24:22.170 "allow_accel_sequence": false, 00:24:22.170 "rdma_max_cq_size": 0, 00:24:22.170 "rdma_cm_event_timeout_ms": 0, 00:24:22.170 "dhchap_digests": [ 00:24:22.170 "sha256", 00:24:22.170 "sha384", 00:24:22.170 "sha512" 00:24:22.170 ], 00:24:22.170 "dhchap_dhgroups": [ 00:24:22.170 "null", 00:24:22.170 "ffdhe2048", 00:24:22.170 "ffdhe3072", 00:24:22.170 "ffdhe4096", 00:24:22.170 "ffdhe6144", 00:24:22.170 "ffdhe8192" 00:24:22.170 ] 00:24:22.170 } 00:24:22.170 }, 00:24:22.170 { 00:24:22.171 "method": "bdev_nvme_set_hotplug", 00:24:22.171 "params": { 00:24:22.171 "period_us": 100000, 00:24:22.171 "enable": false 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "bdev_malloc_create", 00:24:22.171 "params": { 00:24:22.171 "name": "malloc0", 00:24:22.171 "num_blocks": 8192, 00:24:22.171 "block_size": 4096, 00:24:22.171 "physical_block_size": 4096, 00:24:22.171 "uuid": "2147439d-1211-4114-9e9f-dd1eebd1ae0a", 00:24:22.171 "optimal_io_boundary": 0, 00:24:22.171 "md_size": 0, 00:24:22.171 "dif_type": 0, 00:24:22.171 "dif_is_head_of_md": false, 00:24:22.171 "dif_pi_format": 0 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "bdev_wait_for_examine" 00:24:22.171 } 00:24:22.171 ] 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "subsystem": "nbd", 00:24:22.171 "config": [] 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "subsystem": "scheduler", 00:24:22.171 "config": [ 00:24:22.171 { 00:24:22.171 "method": "framework_set_scheduler", 00:24:22.171 "params": { 00:24:22.171 "name": "static" 00:24:22.171 } 00:24:22.171 } 00:24:22.171 ] 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "subsystem": "nvmf", 00:24:22.171 "config": [ 00:24:22.171 { 00:24:22.171 "method": "nvmf_set_config", 00:24:22.171 "params": { 00:24:22.171 "discovery_filter": "match_any", 00:24:22.171 "admin_cmd_passthru": { 00:24:22.171 "identify_ctrlr": false 00:24:22.171 }, 00:24:22.171 "dhchap_digests": [ 00:24:22.171 "sha256", 00:24:22.171 "sha384", 00:24:22.171 "sha512" 00:24:22.171 ], 00:24:22.171 "dhchap_dhgroups": [ 00:24:22.171 "null", 00:24:22.171 "ffdhe2048", 00:24:22.171 "ffdhe3072", 00:24:22.171 "ffdhe4096", 00:24:22.171 "ffdhe6144", 00:24:22.171 "ffdhe8192" 00:24:22.171 ] 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_set_max_subsystems", 00:24:22.171 "params": { 00:24:22.171 "max_subsystems": 1024 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_set_crdt", 00:24:22.171 "params": { 
00:24:22.171 "crdt1": 0, 00:24:22.171 "crdt2": 0, 00:24:22.171 "crdt3": 0 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_create_transport", 00:24:22.171 "params": { 00:24:22.171 "trtype": "TCP", 00:24:22.171 "max_queue_depth": 128, 00:24:22.171 "max_io_qpairs_per_ctrlr": 127, 00:24:22.171 "in_capsule_data_size": 4096, 00:24:22.171 "max_io_size": 131072, 00:24:22.171 "io_unit_size": 131072, 00:24:22.171 "max_aq_depth": 128, 00:24:22.171 "num_shared_buffers": 511, 00:24:22.171 "buf_cache_size": 4294967295, 00:24:22.171 "dif_insert_or_strip": false, 00:24:22.171 "zcopy": false, 00:24:22.171 "c2h_success": false, 00:24:22.171 "sock_priority": 0, 00:24:22.171 "abort_timeout_sec": 1, 00:24:22.171 "ack_timeout": 0, 00:24:22.171 "data_wr_pool_size": 0 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_create_subsystem", 00:24:22.171 "params": { 00:24:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.171 "allow_any_host": false, 00:24:22.171 "serial_number": "SPDK00000000000001", 00:24:22.171 "model_number": "SPDK bdev Controller", 00:24:22.171 "max_namespaces": 10, 00:24:22.171 "min_cntlid": 1, 00:24:22.171 "max_cntlid": 65519, 00:24:22.171 "ana_reporting": false 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_subsystem_add_host", 00:24:22.171 "params": { 00:24:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.171 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.171 "psk": "key0" 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_subsystem_add_ns", 00:24:22.171 "params": { 00:24:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.171 "namespace": { 00:24:22.171 "nsid": 1, 00:24:22.171 "bdev_name": "malloc0", 00:24:22.171 "nguid": "2147439D121141149E9FDD1EEBD1AE0A", 00:24:22.171 "uuid": "2147439d-1211-4114-9e9f-dd1eebd1ae0a", 00:24:22.171 "no_auto_visible": false 00:24:22.171 } 00:24:22.171 } 00:24:22.171 }, 00:24:22.171 { 00:24:22.171 "method": "nvmf_subsystem_add_listener", 00:24:22.171 "params": { 00:24:22.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.171 "listen_address": { 00:24:22.171 "trtype": "TCP", 00:24:22.171 "adrfam": "IPv4", 00:24:22.171 "traddr": "10.0.0.2", 00:24:22.171 "trsvcid": "4420" 00:24:22.171 }, 00:24:22.171 "secure_channel": true 00:24:22.171 } 00:24:22.171 } 00:24:22.171 ] 00:24:22.171 } 00:24:22.171 ] 00:24:22.171 }' 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=289286 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 289286 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 289286 ']' 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.171 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.172 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:22.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.172 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.172 00:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.172 [2024-12-07 00:51:38.305764] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:22.172 [2024-12-07 00:51:38.305865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.430 [2024-12-07 00:51:38.381857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.430 [2024-12-07 00:51:38.427824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.430 [2024-12-07 00:51:38.427897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.430 [2024-12-07 00:51:38.427924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.430 [2024-12-07 00:51:38.427935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.430 [2024-12-07 00:51:38.427944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.430 [2024-12-07 00:51:38.428581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.688 [2024-12-07 00:51:38.668067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.688 [2024-12-07 00:51:38.700106] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.688 [2024-12-07 00:51:38.700359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.254 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=289436 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 289436 /var/tmp/bdevperf.sock 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 289436 ']' 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:23.255 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.255 00:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:23.255 "subsystems": [ 00:24:23.255 { 00:24:23.255 "subsystem": "keyring", 00:24:23.255 "config": [ 00:24:23.255 { 00:24:23.255 "method": "keyring_file_add_key", 00:24:23.255 "params": { 00:24:23.255 "name": "key0", 00:24:23.255 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:23.255 } 00:24:23.255 } 00:24:23.255 ] 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "subsystem": "iobuf", 00:24:23.255 "config": [ 00:24:23.255 { 00:24:23.255 "method": "iobuf_set_options", 00:24:23.255 "params": { 00:24:23.255 "small_pool_count": 8192, 00:24:23.255 "large_pool_count": 1024, 00:24:23.255 "small_bufsize": 8192, 00:24:23.255 "large_bufsize": 135168, 00:24:23.255 "enable_numa": false 00:24:23.255 } 00:24:23.255 } 00:24:23.255 ] 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "subsystem": "sock", 00:24:23.255 "config": [ 00:24:23.255 { 00:24:23.255 "method": "sock_set_default_impl", 00:24:23.255 "params": { 00:24:23.255 "impl_name": "posix" 00:24:23.255 } 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "method": "sock_impl_set_options", 00:24:23.255 "params": { 00:24:23.255 "impl_name": "ssl", 00:24:23.255 "recv_buf_size": 4096, 00:24:23.255 "send_buf_size": 4096, 00:24:23.255 "enable_recv_pipe": true, 00:24:23.255 "enable_quickack": false, 00:24:23.255 "enable_placement_id": 0, 00:24:23.255 "enable_zerocopy_send_server": true, 00:24:23.255 "enable_zerocopy_send_client": false, 00:24:23.255 "zerocopy_threshold": 0, 00:24:23.255 "tls_version": 0, 00:24:23.255 "enable_ktls": false 00:24:23.255 } 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "method": "sock_impl_set_options", 00:24:23.255 "params": { 00:24:23.255 "impl_name": "posix", 00:24:23.255 "recv_buf_size": 2097152, 00:24:23.255 "send_buf_size": 2097152, 00:24:23.255 "enable_recv_pipe": true, 00:24:23.255 "enable_quickack": false, 00:24:23.255 "enable_placement_id": 0, 00:24:23.255 "enable_zerocopy_send_server": true, 00:24:23.255 "enable_zerocopy_send_client": false, 00:24:23.255 "zerocopy_threshold": 0, 00:24:23.255 "tls_version": 0, 00:24:23.255 "enable_ktls": false 00:24:23.255 } 00:24:23.255 } 00:24:23.255 ] 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "subsystem": "vmd", 00:24:23.255 "config": [] 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "subsystem": "accel", 00:24:23.255 "config": [ 00:24:23.255 { 00:24:23.255 "method": "accel_set_options", 00:24:23.255 "params": { 00:24:23.255 "small_cache_size": 128, 00:24:23.255 "large_cache_size": 16, 00:24:23.255 "task_count": 2048, 00:24:23.255 "sequence_count": 2048, 00:24:23.255 "buf_count": 2048 00:24:23.255 } 00:24:23.255 } 00:24:23.255 ] 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "subsystem": "bdev", 00:24:23.255 "config": [ 00:24:23.255 { 00:24:23.255 "method": "bdev_set_options", 00:24:23.255 "params": { 00:24:23.255 "bdev_io_pool_size": 65535, 00:24:23.255 "bdev_io_cache_size": 256, 00:24:23.255 "bdev_auto_examine": true, 00:24:23.255 "iobuf_small_cache_size": 128, 00:24:23.255 "iobuf_large_cache_size": 16 00:24:23.255 } 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "method": "bdev_raid_set_options", 00:24:23.255 "params": { 00:24:23.255 "process_window_size_kb": 1024, 00:24:23.255 "process_max_bandwidth_mb_sec": 0 00:24:23.255 } 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "method": "bdev_iscsi_set_options", 00:24:23.255 "params": { 00:24:23.255 "timeout_sec": 30 00:24:23.255 } 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "method": "bdev_nvme_set_options", 00:24:23.255 "params": { 00:24:23.255 "action_on_timeout": "none", 00:24:23.255 
"timeout_us": 0, 00:24:23.255 "timeout_admin_us": 0, 00:24:23.255 "keep_alive_timeout_ms": 10000, 00:24:23.255 "arbitration_burst": 0, 00:24:23.255 "low_priority_weight": 0, 00:24:23.255 "medium_priority_weight": 0, 00:24:23.255 "high_priority_weight": 0, 00:24:23.255 "nvme_adminq_poll_period_us": 10000, 00:24:23.255 "nvme_ioq_poll_period_us": 0, 00:24:23.255 "io_queue_requests": 512, 00:24:23.255 "delay_cmd_submit": true, 00:24:23.255 "transport_retry_count": 4, 00:24:23.255 "bdev_retry_count": 3, 00:24:23.255 "transport_ack_timeout": 0, 00:24:23.255 "ctrlr_loss_timeout_sec": 0, 00:24:23.255 "reconnect_delay_sec": 0, 00:24:23.255 "fast_io_fail_timeout_sec": 0, 00:24:23.255 "disable_auto_failback": false, 00:24:23.255 "generate_uuids": false, 00:24:23.255 "transport_tos": 0, 00:24:23.255 "nvme_error_stat": false, 00:24:23.255 "rdma_srq_size": 0, 00:24:23.255 "io_path_stat": false, 00:24:23.255 "allow_accel_sequence": false, 00:24:23.255 "rdma_max_cq_size": 0, 00:24:23.255 "rdma_cm_event_timeout_ms": 0, 00:24:23.255 "dhchap_digests": [ 00:24:23.255 "sha256", 00:24:23.255 "sha384", 00:24:23.255 "sha512" 00:24:23.255 ], 00:24:23.255 "dhchap_dhgroups": [ 00:24:23.255 "null", 00:24:23.255 "ffdhe2048", 00:24:23.255 "ffdhe3072", 00:24:23.255 "ffdhe4096", 00:24:23.256 "ffdhe6144", 00:24:23.256 "ffdhe8192" 00:24:23.256 ] 00:24:23.256 } 00:24:23.256 }, 00:24:23.256 { 00:24:23.256 "method": "bdev_nvme_attach_controller", 00:24:23.256 "params": { 00:24:23.256 "name": "TLSTEST", 00:24:23.256 "trtype": "TCP", 00:24:23.256 "adrfam": "IPv4", 00:24:23.256 "traddr": "10.0.0.2", 00:24:23.256 "trsvcid": "4420", 00:24:23.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.256 "prchk_reftag": false, 00:24:23.256 "prchk_guard": false, 00:24:23.256 "ctrlr_loss_timeout_sec": 0, 00:24:23.256 "reconnect_delay_sec": 0, 00:24:23.256 "fast_io_fail_timeout_sec": 0, 00:24:23.256 "psk": "key0", 00:24:23.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.256 "hdgst": false, 00:24:23.256 "ddgst": false, 00:24:23.256 "multipath": "multipath" 00:24:23.256 } 00:24:23.256 }, 00:24:23.256 { 00:24:23.256 "method": "bdev_nvme_set_hotplug", 00:24:23.256 "params": { 00:24:23.256 "period_us": 100000, 00:24:23.256 "enable": false 00:24:23.256 } 00:24:23.256 }, 00:24:23.256 { 00:24:23.256 "method": "bdev_wait_for_examine" 00:24:23.256 } 00:24:23.256 ] 00:24:23.256 }, 00:24:23.256 { 00:24:23.256 "subsystem": "nbd", 00:24:23.256 "config": [] 00:24:23.256 } 00:24:23.256 ] 00:24:23.256 }' 00:24:23.256 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.256 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.256 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.514 [2024-12-07 00:51:39.440739] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:23.514 [2024-12-07 00:51:39.440830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289436 ] 00:24:23.514 [2024-12-07 00:51:39.511240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.514 [2024-12-07 00:51:39.557142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.773 [2024-12-07 00:51:39.727429] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.773 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.773 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.773 00:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:24.031 Running I/O for 10 seconds... 00:24:25.903 3548.00 IOPS, 13.86 MiB/s [2024-12-06T23:51:42.986Z] 3489.50 IOPS, 13.63 MiB/s [2024-12-06T23:51:44.358Z] 3504.00 IOPS, 13.69 MiB/s [2024-12-06T23:51:45.290Z] 3526.00 IOPS, 13.77 MiB/s [2024-12-06T23:51:46.219Z] 3519.60 IOPS, 13.75 MiB/s [2024-12-06T23:51:47.152Z] 3524.00 IOPS, 13.77 MiB/s [2024-12-06T23:51:48.087Z] 3537.57 IOPS, 13.82 MiB/s [2024-12-06T23:51:49.021Z] 3527.75 IOPS, 13.78 MiB/s [2024-12-06T23:51:50.396Z] 3525.11 IOPS, 13.77 MiB/s [2024-12-06T23:51:50.396Z] 3522.60 IOPS, 13.76 MiB/s 00:24:34.245 Latency(us) 00:24:34.245 [2024-12-06T23:51:50.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.245 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.245 Verification LBA range: start 0x0 length 0x2000 00:24:34.245 TLSTESTn1 : 10.03 3525.66 13.77 0.00 0.00 36236.03 5971.06 40583.77 00:24:34.245 [2024-12-06T23:51:50.396Z] =================================================================================================================== 00:24:34.245 [2024-12-06T23:51:50.396Z] Total : 3525.66 13.77 0.00 0.00 36236.03 5971.06 40583.77 00:24:34.245 { 00:24:34.245 "results": [ 00:24:34.245 { 00:24:34.245 "job": "TLSTESTn1", 00:24:34.245 "core_mask": "0x4", 00:24:34.245 "workload": "verify", 00:24:34.245 "status": "finished", 00:24:34.245 "verify_range": { 00:24:34.245 "start": 0, 00:24:34.245 "length": 8192 00:24:34.245 }, 00:24:34.245 "queue_depth": 128, 00:24:34.245 "io_size": 4096, 00:24:34.245 "runtime": 10.027355, 00:24:34.246 "iops": 3525.6555691904796, 00:24:34.246 "mibps": 13.77209206715031, 00:24:34.246 "io_failed": 0, 00:24:34.246 "io_timeout": 0, 00:24:34.246 "avg_latency_us": 36236.02502238272, 00:24:34.246 "min_latency_us": 5971.057777777778, 00:24:34.246 "max_latency_us": 40583.77481481482 00:24:34.246 } 00:24:34.246 ], 00:24:34.246 "core_count": 1 00:24:34.246 } 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 289436 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 289436 ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 289436 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289436 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289436' 00:24:34.246 killing process with pid 289436 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 289436 00:24:34.246 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.246 00:24:34.246 Latency(us) 00:24:34.246 [2024-12-06T23:51:50.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.246 [2024-12-06T23:51:50.397Z] =================================================================================================================== 00:24:34.246 [2024-12-06T23:51:50.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 289436 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 289286 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 289286 ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 289286 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289286 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289286' 00:24:34.246 killing process with pid 289286 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 289286 00:24:34.246 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 289286 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=290753 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 290753 00:24:34.505 00:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 290753 ']' 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.505 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.505 [2024-12-07 00:51:50.571977] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:34.505 [2024-12-07 00:51:50.572117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.505 [2024-12-07 00:51:50.645213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.764 [2024-12-07 00:51:50.689350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.764 [2024-12-07 00:51:50.689403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.764 [2024-12-07 00:51:50.689419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.764 [2024-12-07 00:51:50.689432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.764 [2024-12-07 00:51:50.689443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.764 [2024-12-07 00:51:50.690171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.q1AZ7btZQL 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.q1AZ7btZQL 00:24:34.764 00:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:35.022 [2024-12-07 00:51:51.087151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.022 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.280 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.538 [2024-12-07 00:51:51.628611] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.538 [2024-12-07 00:51:51.628868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.538 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.796 malloc0 00:24:35.796 00:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:36.053 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:36.620 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=291048 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 291048 /var/tmp/bdevperf.sock 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 291048 ']' 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.929 00:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.929 [2024-12-07 00:51:52.831037] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:36.929 [2024-12-07 00:51:52.831118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291048 ] 00:24:36.929 [2024-12-07 00:51:52.900292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.929 [2024-12-07 00:51:52.946084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.929 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.929 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.929 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:37.492 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:37.492 [2024-12-07 00:51:53.580965] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.749 nvme0n1 00:24:37.749 00:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.749 Running I/O for 1 seconds... 
00:24:38.681 3227.00 IOPS, 12.61 MiB/s 00:24:38.681 Latency(us) 00:24:38.681 [2024-12-06T23:51:54.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.681 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:38.681 Verification LBA range: start 0x0 length 0x2000 00:24:38.681 nvme0n1 : 1.02 3295.01 12.87 0.00 0.00 38515.09 6602.15 50098.63 00:24:38.681 [2024-12-06T23:51:54.832Z] =================================================================================================================== 00:24:38.681 [2024-12-06T23:51:54.832Z] Total : 3295.01 12.87 0.00 0.00 38515.09 6602.15 50098.63 00:24:38.681 { 00:24:38.681 "results": [ 00:24:38.681 { 00:24:38.681 "job": "nvme0n1", 00:24:38.681 "core_mask": "0x2", 00:24:38.681 "workload": "verify", 00:24:38.681 "status": "finished", 00:24:38.681 "verify_range": { 00:24:38.681 "start": 0, 00:24:38.681 "length": 8192 00:24:38.681 }, 00:24:38.681 "queue_depth": 128, 00:24:38.681 "io_size": 4096, 00:24:38.681 "runtime": 1.018206, 00:24:38.681 "iops": 3295.0110292023423, 00:24:38.681 "mibps": 12.87113683282165, 00:24:38.681 "io_failed": 0, 00:24:38.681 "io_timeout": 0, 00:24:38.681 "avg_latency_us": 38515.09156350389, 00:24:38.681 "min_latency_us": 6602.145185185185, 00:24:38.681 "max_latency_us": 50098.63111111111 00:24:38.681 } 00:24:38.681 ], 00:24:38.681 "core_count": 1 00:24:38.681 } 00:24:38.681 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 291048 00:24:38.681 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 291048 ']' 00:24:38.681 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 291048 00:24:38.681 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.681 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291048 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291048' 00:24:38.939 killing process with pid 291048 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 291048 00:24:38.939 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.939 00:24:38.939 Latency(us) 00:24:38.939 [2024-12-06T23:51:55.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.939 [2024-12-06T23:51:55.090Z] =================================================================================================================== 00:24:38.939 [2024-12-06T23:51:55.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.939 00:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 291048 00:24:38.939 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 290753 00:24:38.939 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 290753 ']' 00:24:38.939 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 290753 00:24:38.939 00:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.939 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.939 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290753 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290753' 00:24:39.198 killing process with pid 290753 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 290753 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 290753 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=291325 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 291325 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 291325 ']' 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.198 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.456 [2024-12-07 00:51:55.370106] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:39.456 [2024-12-07 00:51:55.370208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.456 [2024-12-07 00:51:55.444545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.456 [2024-12-07 00:51:55.490838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.456 [2024-12-07 00:51:55.490888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:39.456 [2024-12-07 00:51:55.490912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.456 [2024-12-07 00:51:55.490922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.456 [2024-12-07 00:51:55.490931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.456 [2024-12-07 00:51:55.491544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.456 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.456 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:39.456 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.456 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.456 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.713 [2024-12-07 00:51:55.628591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.713 malloc0 00:24:39.713 [2024-12-07 00:51:55.660104] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.713 [2024-12-07 00:51:55.660371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=291460 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 291460 /var/tmp/bdevperf.sock 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 291460 ']' 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.713 00:51:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.713 [2024-12-07 00:51:55.730597] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
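Here target/tls.sh@254-264 brings up the initiator side: bdevperf is started idle (-z) on its own RPC socket, the TLS PSK file is registered as key0, a controller is attached with --psk, and perform_tests drives the 1-second verify workload whose results are printed below. A hedged recap of those steps, using only commands that appear in this trace (the /tmp/tmp.q1AZ7btZQL PSK file was generated earlier in the run; paths assume the SPDK repo root):

  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests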
00:24:39.713 [2024-12-07 00:51:55.730659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291460 ] 00:24:39.713 [2024-12-07 00:51:55.796855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.713 [2024-12-07 00:51:55.842461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.970 00:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.970 00:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:39.970 00:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q1AZ7btZQL 00:24:40.227 00:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:40.484 [2024-12-07 00:51:56.561467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.741 nvme0n1 00:24:40.742 00:51:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:40.742 Running I/O for 1 seconds... 00:24:41.671 3262.00 IOPS, 12.74 MiB/s 00:24:41.671 Latency(us) 00:24:41.671 [2024-12-06T23:51:57.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.671 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:41.671 Verification LBA range: start 0x0 length 0x2000 00:24:41.671 nvme0n1 : 1.03 3304.17 12.91 0.00 0.00 38253.32 6602.15 30874.74 00:24:41.671 [2024-12-06T23:51:57.822Z] =================================================================================================================== 00:24:41.671 [2024-12-06T23:51:57.822Z] Total : 3304.17 12.91 0.00 0.00 38253.32 6602.15 30874.74 00:24:41.671 { 00:24:41.671 "results": [ 00:24:41.671 { 00:24:41.671 "job": "nvme0n1", 00:24:41.671 "core_mask": "0x2", 00:24:41.671 "workload": "verify", 00:24:41.671 "status": "finished", 00:24:41.671 "verify_range": { 00:24:41.671 "start": 0, 00:24:41.671 "length": 8192 00:24:41.671 }, 00:24:41.671 "queue_depth": 128, 00:24:41.671 "io_size": 4096, 00:24:41.671 "runtime": 1.025977, 00:24:41.671 "iops": 3304.167637286216, 00:24:41.671 "mibps": 12.906904833149282, 00:24:41.671 "io_failed": 0, 00:24:41.671 "io_timeout": 0, 00:24:41.671 "avg_latency_us": 38253.31952365345, 00:24:41.671 "min_latency_us": 6602.145185185185, 00:24:41.671 "max_latency_us": 30874.737777777777 00:24:41.671 } 00:24:41.671 ], 00:24:41.671 "core_count": 1 00:24:41.671 } 00:24:41.671 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:41.671 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.671 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.928 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.928 00:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:41.928 "subsystems": [ 00:24:41.928 { 00:24:41.928 "subsystem": "keyring", 00:24:41.928 "config": [ 00:24:41.928 { 00:24:41.928 "method": "keyring_file_add_key", 00:24:41.928 "params": { 00:24:41.928 "name": "key0", 00:24:41.928 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:41.928 } 00:24:41.928 } 00:24:41.928 ] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "iobuf", 00:24:41.928 "config": [ 00:24:41.928 { 00:24:41.928 "method": "iobuf_set_options", 00:24:41.928 "params": { 00:24:41.928 "small_pool_count": 8192, 00:24:41.928 "large_pool_count": 1024, 00:24:41.928 "small_bufsize": 8192, 00:24:41.928 "large_bufsize": 135168, 00:24:41.928 "enable_numa": false 00:24:41.928 } 00:24:41.928 } 00:24:41.928 ] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "sock", 00:24:41.928 "config": [ 00:24:41.928 { 00:24:41.928 "method": "sock_set_default_impl", 00:24:41.928 "params": { 00:24:41.928 "impl_name": "posix" 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "sock_impl_set_options", 00:24:41.928 "params": { 00:24:41.928 "impl_name": "ssl", 00:24:41.928 "recv_buf_size": 4096, 00:24:41.928 "send_buf_size": 4096, 00:24:41.928 "enable_recv_pipe": true, 00:24:41.928 "enable_quickack": false, 00:24:41.928 "enable_placement_id": 0, 00:24:41.928 "enable_zerocopy_send_server": true, 00:24:41.928 "enable_zerocopy_send_client": false, 00:24:41.928 "zerocopy_threshold": 0, 00:24:41.928 "tls_version": 0, 00:24:41.928 "enable_ktls": false 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "sock_impl_set_options", 00:24:41.928 "params": { 00:24:41.928 "impl_name": "posix", 00:24:41.928 "recv_buf_size": 2097152, 00:24:41.928 "send_buf_size": 2097152, 00:24:41.928 "enable_recv_pipe": true, 00:24:41.928 "enable_quickack": false, 00:24:41.928 "enable_placement_id": 0, 00:24:41.928 "enable_zerocopy_send_server": true, 00:24:41.928 "enable_zerocopy_send_client": false, 00:24:41.928 "zerocopy_threshold": 0, 00:24:41.928 "tls_version": 0, 00:24:41.928 "enable_ktls": false 00:24:41.928 } 00:24:41.928 } 00:24:41.928 ] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "vmd", 00:24:41.928 "config": [] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "accel", 00:24:41.928 "config": [ 00:24:41.928 { 00:24:41.928 "method": "accel_set_options", 00:24:41.928 "params": { 00:24:41.928 "small_cache_size": 128, 00:24:41.928 "large_cache_size": 16, 00:24:41.928 "task_count": 2048, 00:24:41.928 "sequence_count": 2048, 00:24:41.928 "buf_count": 2048 00:24:41.928 } 00:24:41.928 } 00:24:41.928 ] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "bdev", 00:24:41.928 "config": [ 00:24:41.928 { 00:24:41.928 "method": "bdev_set_options", 00:24:41.928 "params": { 00:24:41.928 "bdev_io_pool_size": 65535, 00:24:41.928 "bdev_io_cache_size": 256, 00:24:41.928 "bdev_auto_examine": true, 00:24:41.928 "iobuf_small_cache_size": 128, 00:24:41.928 "iobuf_large_cache_size": 16 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_raid_set_options", 00:24:41.928 "params": { 00:24:41.928 "process_window_size_kb": 1024, 00:24:41.928 "process_max_bandwidth_mb_sec": 0 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_iscsi_set_options", 00:24:41.928 "params": { 00:24:41.928 "timeout_sec": 30 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_nvme_set_options", 00:24:41.928 "params": { 00:24:41.928 "action_on_timeout": "none", 00:24:41.928 
"timeout_us": 0, 00:24:41.928 "timeout_admin_us": 0, 00:24:41.928 "keep_alive_timeout_ms": 10000, 00:24:41.928 "arbitration_burst": 0, 00:24:41.928 "low_priority_weight": 0, 00:24:41.928 "medium_priority_weight": 0, 00:24:41.928 "high_priority_weight": 0, 00:24:41.928 "nvme_adminq_poll_period_us": 10000, 00:24:41.928 "nvme_ioq_poll_period_us": 0, 00:24:41.928 "io_queue_requests": 0, 00:24:41.928 "delay_cmd_submit": true, 00:24:41.928 "transport_retry_count": 4, 00:24:41.928 "bdev_retry_count": 3, 00:24:41.928 "transport_ack_timeout": 0, 00:24:41.928 "ctrlr_loss_timeout_sec": 0, 00:24:41.928 "reconnect_delay_sec": 0, 00:24:41.928 "fast_io_fail_timeout_sec": 0, 00:24:41.928 "disable_auto_failback": false, 00:24:41.928 "generate_uuids": false, 00:24:41.928 "transport_tos": 0, 00:24:41.928 "nvme_error_stat": false, 00:24:41.928 "rdma_srq_size": 0, 00:24:41.928 "io_path_stat": false, 00:24:41.928 "allow_accel_sequence": false, 00:24:41.928 "rdma_max_cq_size": 0, 00:24:41.928 "rdma_cm_event_timeout_ms": 0, 00:24:41.928 "dhchap_digests": [ 00:24:41.928 "sha256", 00:24:41.928 "sha384", 00:24:41.928 "sha512" 00:24:41.928 ], 00:24:41.928 "dhchap_dhgroups": [ 00:24:41.928 "null", 00:24:41.928 "ffdhe2048", 00:24:41.928 "ffdhe3072", 00:24:41.928 "ffdhe4096", 00:24:41.928 "ffdhe6144", 00:24:41.928 "ffdhe8192" 00:24:41.928 ] 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_nvme_set_hotplug", 00:24:41.928 "params": { 00:24:41.928 "period_us": 100000, 00:24:41.928 "enable": false 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_malloc_create", 00:24:41.928 "params": { 00:24:41.928 "name": "malloc0", 00:24:41.928 "num_blocks": 8192, 00:24:41.928 "block_size": 4096, 00:24:41.928 "physical_block_size": 4096, 00:24:41.928 "uuid": "7c101c85-760a-468c-aae2-750558b30f95", 00:24:41.928 "optimal_io_boundary": 0, 00:24:41.928 "md_size": 0, 00:24:41.928 "dif_type": 0, 00:24:41.928 "dif_is_head_of_md": false, 00:24:41.928 "dif_pi_format": 0 00:24:41.928 } 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "method": "bdev_wait_for_examine" 00:24:41.928 } 00:24:41.928 ] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "nbd", 00:24:41.928 "config": [] 00:24:41.928 }, 00:24:41.928 { 00:24:41.928 "subsystem": "scheduler", 00:24:41.929 "config": [ 00:24:41.929 { 00:24:41.929 "method": "framework_set_scheduler", 00:24:41.929 "params": { 00:24:41.929 "name": "static" 00:24:41.929 } 00:24:41.929 } 00:24:41.929 ] 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "subsystem": "nvmf", 00:24:41.929 "config": [ 00:24:41.929 { 00:24:41.929 "method": "nvmf_set_config", 00:24:41.929 "params": { 00:24:41.929 "discovery_filter": "match_any", 00:24:41.929 "admin_cmd_passthru": { 00:24:41.929 "identify_ctrlr": false 00:24:41.929 }, 00:24:41.929 "dhchap_digests": [ 00:24:41.929 "sha256", 00:24:41.929 "sha384", 00:24:41.929 "sha512" 00:24:41.929 ], 00:24:41.929 "dhchap_dhgroups": [ 00:24:41.929 "null", 00:24:41.929 "ffdhe2048", 00:24:41.929 "ffdhe3072", 00:24:41.929 "ffdhe4096", 00:24:41.929 "ffdhe6144", 00:24:41.929 "ffdhe8192" 00:24:41.929 ] 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_set_max_subsystems", 00:24:41.929 "params": { 00:24:41.929 "max_subsystems": 1024 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_set_crdt", 00:24:41.929 "params": { 00:24:41.929 "crdt1": 0, 00:24:41.929 "crdt2": 0, 00:24:41.929 "crdt3": 0 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_create_transport", 00:24:41.929 "params": 
{ 00:24:41.929 "trtype": "TCP", 00:24:41.929 "max_queue_depth": 128, 00:24:41.929 "max_io_qpairs_per_ctrlr": 127, 00:24:41.929 "in_capsule_data_size": 4096, 00:24:41.929 "max_io_size": 131072, 00:24:41.929 "io_unit_size": 131072, 00:24:41.929 "max_aq_depth": 128, 00:24:41.929 "num_shared_buffers": 511, 00:24:41.929 "buf_cache_size": 4294967295, 00:24:41.929 "dif_insert_or_strip": false, 00:24:41.929 "zcopy": false, 00:24:41.929 "c2h_success": false, 00:24:41.929 "sock_priority": 0, 00:24:41.929 "abort_timeout_sec": 1, 00:24:41.929 "ack_timeout": 0, 00:24:41.929 "data_wr_pool_size": 0 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_create_subsystem", 00:24:41.929 "params": { 00:24:41.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.929 "allow_any_host": false, 00:24:41.929 "serial_number": "00000000000000000000", 00:24:41.929 "model_number": "SPDK bdev Controller", 00:24:41.929 "max_namespaces": 32, 00:24:41.929 "min_cntlid": 1, 00:24:41.929 "max_cntlid": 65519, 00:24:41.929 "ana_reporting": false 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_subsystem_add_host", 00:24:41.929 "params": { 00:24:41.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.929 "host": "nqn.2016-06.io.spdk:host1", 00:24:41.929 "psk": "key0" 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_subsystem_add_ns", 00:24:41.929 "params": { 00:24:41.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.929 "namespace": { 00:24:41.929 "nsid": 1, 00:24:41.929 "bdev_name": "malloc0", 00:24:41.929 "nguid": "7C101C85760A468CAAE2750558B30F95", 00:24:41.929 "uuid": "7c101c85-760a-468c-aae2-750558b30f95", 00:24:41.929 "no_auto_visible": false 00:24:41.929 } 00:24:41.929 } 00:24:41.929 }, 00:24:41.929 { 00:24:41.929 "method": "nvmf_subsystem_add_listener", 00:24:41.929 "params": { 00:24:41.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.929 "listen_address": { 00:24:41.929 "trtype": "TCP", 00:24:41.929 "adrfam": "IPv4", 00:24:41.929 "traddr": "10.0.0.2", 00:24:41.929 "trsvcid": "4420" 00:24:41.929 }, 00:24:41.929 "secure_channel": false, 00:24:41.929 "sock_impl": "ssl" 00:24:41.929 } 00:24:41.929 } 00:24:41.929 ] 00:24:41.929 } 00:24:41.929 ] 00:24:41.929 }' 00:24:41.929 00:51:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:42.185 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:42.185 "subsystems": [ 00:24:42.185 { 00:24:42.185 "subsystem": "keyring", 00:24:42.185 "config": [ 00:24:42.185 { 00:24:42.185 "method": "keyring_file_add_key", 00:24:42.185 "params": { 00:24:42.185 "name": "key0", 00:24:42.185 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:42.185 } 00:24:42.185 } 00:24:42.185 ] 00:24:42.185 }, 00:24:42.185 { 00:24:42.185 "subsystem": "iobuf", 00:24:42.185 "config": [ 00:24:42.185 { 00:24:42.185 "method": "iobuf_set_options", 00:24:42.185 "params": { 00:24:42.185 "small_pool_count": 8192, 00:24:42.185 "large_pool_count": 1024, 00:24:42.185 "small_bufsize": 8192, 00:24:42.185 "large_bufsize": 135168, 00:24:42.185 "enable_numa": false 00:24:42.185 } 00:24:42.185 } 00:24:42.185 ] 00:24:42.185 }, 00:24:42.185 { 00:24:42.185 "subsystem": "sock", 00:24:42.185 "config": [ 00:24:42.185 { 00:24:42.185 "method": "sock_set_default_impl", 00:24:42.185 "params": { 00:24:42.185 "impl_name": "posix" 00:24:42.185 } 00:24:42.185 }, 00:24:42.185 { 00:24:42.185 "method": "sock_impl_set_options", 00:24:42.185 
"params": { 00:24:42.185 "impl_name": "ssl", 00:24:42.185 "recv_buf_size": 4096, 00:24:42.185 "send_buf_size": 4096, 00:24:42.185 "enable_recv_pipe": true, 00:24:42.186 "enable_quickack": false, 00:24:42.186 "enable_placement_id": 0, 00:24:42.186 "enable_zerocopy_send_server": true, 00:24:42.186 "enable_zerocopy_send_client": false, 00:24:42.186 "zerocopy_threshold": 0, 00:24:42.186 "tls_version": 0, 00:24:42.186 "enable_ktls": false 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "sock_impl_set_options", 00:24:42.186 "params": { 00:24:42.186 "impl_name": "posix", 00:24:42.186 "recv_buf_size": 2097152, 00:24:42.186 "send_buf_size": 2097152, 00:24:42.186 "enable_recv_pipe": true, 00:24:42.186 "enable_quickack": false, 00:24:42.186 "enable_placement_id": 0, 00:24:42.186 "enable_zerocopy_send_server": true, 00:24:42.186 "enable_zerocopy_send_client": false, 00:24:42.186 "zerocopy_threshold": 0, 00:24:42.186 "tls_version": 0, 00:24:42.186 "enable_ktls": false 00:24:42.186 } 00:24:42.186 } 00:24:42.186 ] 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "subsystem": "vmd", 00:24:42.186 "config": [] 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "subsystem": "accel", 00:24:42.186 "config": [ 00:24:42.186 { 00:24:42.186 "method": "accel_set_options", 00:24:42.186 "params": { 00:24:42.186 "small_cache_size": 128, 00:24:42.186 "large_cache_size": 16, 00:24:42.186 "task_count": 2048, 00:24:42.186 "sequence_count": 2048, 00:24:42.186 "buf_count": 2048 00:24:42.186 } 00:24:42.186 } 00:24:42.186 ] 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "subsystem": "bdev", 00:24:42.186 "config": [ 00:24:42.186 { 00:24:42.186 "method": "bdev_set_options", 00:24:42.186 "params": { 00:24:42.186 "bdev_io_pool_size": 65535, 00:24:42.186 "bdev_io_cache_size": 256, 00:24:42.186 "bdev_auto_examine": true, 00:24:42.186 "iobuf_small_cache_size": 128, 00:24:42.186 "iobuf_large_cache_size": 16 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_raid_set_options", 00:24:42.186 "params": { 00:24:42.186 "process_window_size_kb": 1024, 00:24:42.186 "process_max_bandwidth_mb_sec": 0 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_iscsi_set_options", 00:24:42.186 "params": { 00:24:42.186 "timeout_sec": 30 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_nvme_set_options", 00:24:42.186 "params": { 00:24:42.186 "action_on_timeout": "none", 00:24:42.186 "timeout_us": 0, 00:24:42.186 "timeout_admin_us": 0, 00:24:42.186 "keep_alive_timeout_ms": 10000, 00:24:42.186 "arbitration_burst": 0, 00:24:42.186 "low_priority_weight": 0, 00:24:42.186 "medium_priority_weight": 0, 00:24:42.186 "high_priority_weight": 0, 00:24:42.186 "nvme_adminq_poll_period_us": 10000, 00:24:42.186 "nvme_ioq_poll_period_us": 0, 00:24:42.186 "io_queue_requests": 512, 00:24:42.186 "delay_cmd_submit": true, 00:24:42.186 "transport_retry_count": 4, 00:24:42.186 "bdev_retry_count": 3, 00:24:42.186 "transport_ack_timeout": 0, 00:24:42.186 "ctrlr_loss_timeout_sec": 0, 00:24:42.186 "reconnect_delay_sec": 0, 00:24:42.186 "fast_io_fail_timeout_sec": 0, 00:24:42.186 "disable_auto_failback": false, 00:24:42.186 "generate_uuids": false, 00:24:42.186 "transport_tos": 0, 00:24:42.186 "nvme_error_stat": false, 00:24:42.186 "rdma_srq_size": 0, 00:24:42.186 "io_path_stat": false, 00:24:42.186 "allow_accel_sequence": false, 00:24:42.186 "rdma_max_cq_size": 0, 00:24:42.186 "rdma_cm_event_timeout_ms": 0, 00:24:42.186 "dhchap_digests": [ 00:24:42.186 "sha256", 00:24:42.186 "sha384", 00:24:42.186 
"sha512" 00:24:42.186 ], 00:24:42.186 "dhchap_dhgroups": [ 00:24:42.186 "null", 00:24:42.186 "ffdhe2048", 00:24:42.186 "ffdhe3072", 00:24:42.186 "ffdhe4096", 00:24:42.186 "ffdhe6144", 00:24:42.186 "ffdhe8192" 00:24:42.186 ] 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_nvme_attach_controller", 00:24:42.186 "params": { 00:24:42.186 "name": "nvme0", 00:24:42.186 "trtype": "TCP", 00:24:42.186 "adrfam": "IPv4", 00:24:42.186 "traddr": "10.0.0.2", 00:24:42.186 "trsvcid": "4420", 00:24:42.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.186 "prchk_reftag": false, 00:24:42.186 "prchk_guard": false, 00:24:42.186 "ctrlr_loss_timeout_sec": 0, 00:24:42.186 "reconnect_delay_sec": 0, 00:24:42.186 "fast_io_fail_timeout_sec": 0, 00:24:42.186 "psk": "key0", 00:24:42.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.186 "hdgst": false, 00:24:42.186 "ddgst": false, 00:24:42.186 "multipath": "multipath" 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_nvme_set_hotplug", 00:24:42.186 "params": { 00:24:42.186 "period_us": 100000, 00:24:42.186 "enable": false 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_enable_histogram", 00:24:42.186 "params": { 00:24:42.186 "name": "nvme0n1", 00:24:42.186 "enable": true 00:24:42.186 } 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "method": "bdev_wait_for_examine" 00:24:42.186 } 00:24:42.186 ] 00:24:42.186 }, 00:24:42.186 { 00:24:42.186 "subsystem": "nbd", 00:24:42.186 "config": [] 00:24:42.186 } 00:24:42.186 ] 00:24:42.186 }' 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 291460 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 291460 ']' 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 291460 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291460 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291460' 00:24:42.186 killing process with pid 291460 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 291460 00:24:42.186 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.186 00:24:42.186 Latency(us) 00:24:42.186 [2024-12-06T23:51:58.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.186 [2024-12-06T23:51:58.337Z] =================================================================================================================== 00:24:42.186 [2024-12-06T23:51:58.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.186 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 291460 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 291325 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 291325 ']' 
00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 291325 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291325 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291325' 00:24:42.443 killing process with pid 291325 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 291325 00:24:42.443 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 291325 00:24:42.700 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:42.700 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.700 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:42.700 "subsystems": [ 00:24:42.700 { 00:24:42.700 "subsystem": "keyring", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "keyring_file_add_key", 00:24:42.700 "params": { 00:24:42.700 "name": "key0", 00:24:42.700 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:42.700 } 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "iobuf", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "iobuf_set_options", 00:24:42.700 "params": { 00:24:42.700 "small_pool_count": 8192, 00:24:42.700 "large_pool_count": 1024, 00:24:42.700 "small_bufsize": 8192, 00:24:42.700 "large_bufsize": 135168, 00:24:42.700 "enable_numa": false 00:24:42.700 } 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "sock", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "sock_set_default_impl", 00:24:42.700 "params": { 00:24:42.700 "impl_name": "posix" 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "sock_impl_set_options", 00:24:42.700 "params": { 00:24:42.700 "impl_name": "ssl", 00:24:42.700 "recv_buf_size": 4096, 00:24:42.700 "send_buf_size": 4096, 00:24:42.700 "enable_recv_pipe": true, 00:24:42.700 "enable_quickack": false, 00:24:42.700 "enable_placement_id": 0, 00:24:42.700 "enable_zerocopy_send_server": true, 00:24:42.700 "enable_zerocopy_send_client": false, 00:24:42.700 "zerocopy_threshold": 0, 00:24:42.700 "tls_version": 0, 00:24:42.700 "enable_ktls": false 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "sock_impl_set_options", 00:24:42.700 "params": { 00:24:42.700 "impl_name": "posix", 00:24:42.700 "recv_buf_size": 2097152, 00:24:42.700 "send_buf_size": 2097152, 00:24:42.700 "enable_recv_pipe": true, 00:24:42.700 "enable_quickack": false, 00:24:42.700 "enable_placement_id": 0, 00:24:42.700 "enable_zerocopy_send_server": true, 00:24:42.700 "enable_zerocopy_send_client": false, 00:24:42.700 "zerocopy_threshold": 0, 00:24:42.700 "tls_version": 0, 00:24:42.700 "enable_ktls": false 00:24:42.700 } 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "vmd", 
00:24:42.700 "config": [] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "accel", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "accel_set_options", 00:24:42.700 "params": { 00:24:42.700 "small_cache_size": 128, 00:24:42.700 "large_cache_size": 16, 00:24:42.700 "task_count": 2048, 00:24:42.700 "sequence_count": 2048, 00:24:42.700 "buf_count": 2048 00:24:42.700 } 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "bdev", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "bdev_set_options", 00:24:42.700 "params": { 00:24:42.700 "bdev_io_pool_size": 65535, 00:24:42.700 "bdev_io_cache_size": 256, 00:24:42.700 "bdev_auto_examine": true, 00:24:42.700 "iobuf_small_cache_size": 128, 00:24:42.700 "iobuf_large_cache_size": 16 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_raid_set_options", 00:24:42.700 "params": { 00:24:42.700 "process_window_size_kb": 1024, 00:24:42.700 "process_max_bandwidth_mb_sec": 0 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_iscsi_set_options", 00:24:42.700 "params": { 00:24:42.700 "timeout_sec": 30 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_nvme_set_options", 00:24:42.700 "params": { 00:24:42.700 "action_on_timeout": "none", 00:24:42.700 "timeout_us": 0, 00:24:42.700 "timeout_admin_us": 0, 00:24:42.700 "keep_alive_timeout_ms": 10000, 00:24:42.700 "arbitration_burst": 0, 00:24:42.700 "low_priority_weight": 0, 00:24:42.700 "medium_priority_weight": 0, 00:24:42.700 "high_priority_weight": 0, 00:24:42.700 "nvme_adminq_poll_period_us": 10000, 00:24:42.700 "nvme_ioq_poll_period_us": 0, 00:24:42.700 "io_queue_requests": 0, 00:24:42.700 "delay_cmd_submit": true, 00:24:42.700 "transport_retry_count": 4, 00:24:42.700 "bdev_retry_count": 3, 00:24:42.700 "transport_ack_timeout": 0, 00:24:42.700 "ctrlr_loss_timeout_sec": 0, 00:24:42.700 "reconnect_delay_sec": 0, 00:24:42.700 "fast_io_fail_timeout_sec": 0, 00:24:42.700 "disable_auto_failback": false, 00:24:42.700 "generate_uuids": false, 00:24:42.700 "transport_tos": 0, 00:24:42.700 "nvme_error_stat": false, 00:24:42.700 "rdma_srq_size": 0, 00:24:42.700 "io_path_stat": false, 00:24:42.700 "allow_accel_sequence": false, 00:24:42.700 "rdma_max_cq_size": 0, 00:24:42.700 "rdma_cm_event_timeout_ms": 0, 00:24:42.700 "dhchap_digests": [ 00:24:42.700 "sha256", 00:24:42.700 "sha384", 00:24:42.700 "sha512" 00:24:42.700 ], 00:24:42.700 "dhchap_dhgroups": [ 00:24:42.700 "null", 00:24:42.700 "ffdhe2048", 00:24:42.700 "ffdhe3072", 00:24:42.700 "ffdhe4096", 00:24:42.700 "ffdhe6144", 00:24:42.700 "ffdhe8192" 00:24:42.700 ] 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_nvme_set_hotplug", 00:24:42.700 "params": { 00:24:42.700 "period_us": 100000, 00:24:42.700 "enable": false 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_malloc_create", 00:24:42.700 "params": { 00:24:42.700 "name": "malloc0", 00:24:42.700 "num_blocks": 8192, 00:24:42.700 "block_size": 4096, 00:24:42.700 "physical_block_size": 4096, 00:24:42.700 "uuid": "7c101c85-760a-468c-aae2-750558b30f95", 00:24:42.700 "optimal_io_boundary": 0, 00:24:42.700 "md_size": 0, 00:24:42.700 "dif_type": 0, 00:24:42.700 "dif_is_head_of_md": false, 00:24:42.700 "dif_pi_format": 0 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "method": "bdev_wait_for_examine" 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "nbd", 00:24:42.700 "config": [] 
00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "scheduler", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "framework_set_scheduler", 00:24:42.700 "params": { 00:24:42.700 "name": "static" 00:24:42.700 } 00:24:42.700 } 00:24:42.700 ] 00:24:42.700 }, 00:24:42.700 { 00:24:42.700 "subsystem": "nvmf", 00:24:42.700 "config": [ 00:24:42.700 { 00:24:42.700 "method": "nvmf_set_config", 00:24:42.700 "params": { 00:24:42.700 "discovery_filter": "match_any", 00:24:42.700 "admin_cmd_passthru": { 00:24:42.700 "identify_ctrlr": false 00:24:42.700 }, 00:24:42.700 "dhchap_digests": [ 00:24:42.700 "sha256", 00:24:42.700 "sha384", 00:24:42.700 "sha512" 00:24:42.700 ], 00:24:42.700 "dhchap_dhgroups": [ 00:24:42.700 "null", 00:24:42.700 "ffdhe2048", 00:24:42.700 "ffdhe3072", 00:24:42.700 "ffdhe4096", 00:24:42.700 "ffdhe6144", 00:24:42.700 "ffdhe8192" 00:24:42.700 ] 00:24:42.700 } 00:24:42.700 }, 00:24:42.700 { 00:24:42.701 "method": "nvmf_set_max_subsystems", 00:24:42.701 "params": { 00:24:42.701 "max_subsystems": 1024 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_set_crdt", 00:24:42.701 "params": { 00:24:42.701 "crdt1": 0, 00:24:42.701 "crdt2": 0, 00:24:42.701 "crdt3": 0 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_create_transport", 00:24:42.701 "params": { 00:24:42.701 "trtype": "TCP", 00:24:42.701 "max_queue_depth": 128, 00:24:42.701 "max_io_qpairs_per_ctrlr": 127, 00:24:42.701 "in_capsule_data_size": 4096, 00:24:42.701 "max_io_size": 131072, 00:24:42.701 "io_unit_size": 131072, 00:24:42.701 "max_aq_depth": 128, 00:24:42.701 "num_shared_buffers": 511, 00:24:42.701 "buf_cache_size": 4294967295, 00:24:42.701 "dif_insert_or_strip": false, 00:24:42.701 "zcopy": false, 00:24:42.701 "c2h_success": false, 00:24:42.701 "sock_priority": 0, 00:24:42.701 "abort_timeout_sec": 1, 00:24:42.701 "ack_timeout": 0, 00:24:42.701 "data_wr_pool_size": 0 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_create_subsystem", 00:24:42.701 "params": { 00:24:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.701 "allow_any_host": false, 00:24:42.701 "serial_number": "00000000000000000000", 00:24:42.701 "model_number": "SPDK bdev Controller", 00:24:42.701 "max_namespaces": 32, 00:24:42.701 "min_cntlid": 1, 00:24:42.701 "max_cntlid": 65519, 00:24:42.701 "ana_reporting": false 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_subsystem_add_host", 00:24:42.701 "params": { 00:24:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.701 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.701 "psk": "key0" 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_subsystem_add_ns", 00:24:42.701 "params": { 00:24:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.701 "namespace": { 00:24:42.701 "nsid": 1, 00:24:42.701 "bdev_name": "malloc0", 00:24:42.701 "nguid": "7C101C85760A468CAAE2750558B30F95", 00:24:42.701 "uuid": "7c101c85-760a-468c-aae2-750558b30f95", 00:24:42.701 "no_auto_visible": false 00:24:42.701 } 00:24:42.701 } 00:24:42.701 }, 00:24:42.701 { 00:24:42.701 "method": "nvmf_subsystem_add_listener", 00:24:42.701 "params": { 00:24:42.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.701 "listen_address": { 00:24:42.701 "trtype": "TCP", 00:24:42.701 "adrfam": "IPv4", 00:24:42.701 "traddr": "10.0.0.2", 00:24:42.701 "trsvcid": "4420" 00:24:42.701 }, 00:24:42.701 "secure_channel": false, 00:24:42.701 "sock_impl": "ssl" 00:24:42.701 } 00:24:42.701 } 00:24:42.701 ] 00:24:42.701 } 00:24:42.701 
] 00:24:42.701 }' 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=291756 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 291756 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 291756 ']' 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.701 00:51:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.701 [2024-12-07 00:51:58.755039] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:42.701 [2024-12-07 00:51:58.755151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.701 [2024-12-07 00:51:58.830056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.958 [2024-12-07 00:51:58.876120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.958 [2024-12-07 00:51:58.876175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.958 [2024-12-07 00:51:58.876189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.958 [2024-12-07 00:51:58.876200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.958 [2024-12-07 00:51:58.876210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
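For the config-driven pass, target/tls.sh@273 feeds $tgtcfg back into a fresh nvmf_tgt through /dev/fd/62, so the keyring, the TLS listener on 10.0.0.2:4420 and the malloc0 namespace are all recreated at startup without any further RPCs. A sketch of the same idea using process substitution (which is where the /dev/fd/62 in the command line above comes from), assuming $tgtcfg still holds the JSON captured earlier:

  # Replay the saved target config at startup instead of issuing RPCs by hand.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &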
00:24:42.958 [2024-12-07 00:51:58.876797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.215 [2024-12-07 00:51:59.116651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.215 [2024-12-07 00:51:59.148685] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.215 [2024-12-07 00:51:59.148921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=291909 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 291909 /var/tmp/bdevperf.sock 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 291909 ']' 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
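Symmetrically, target/tls.sh@274-277 hands $bperfcfg to a new bdevperf instance (pid 291909) via /dev/fd/63, the JSON echoed just below, so keyring_file_add_key and bdev_nvme_attach_controller are replayed from config rather than issued by hand. A sketch under the same assumption that $bperfcfg holds the earlier save_config output:

  # Same flags as the traced run; the config file supplies the key and the controller.
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &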
00:24:43.780 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:43.780 "subsystems": [ 00:24:43.780 { 00:24:43.780 "subsystem": "keyring", 00:24:43.780 "config": [ 00:24:43.780 { 00:24:43.780 "method": "keyring_file_add_key", 00:24:43.780 "params": { 00:24:43.780 "name": "key0", 00:24:43.780 "path": "/tmp/tmp.q1AZ7btZQL" 00:24:43.780 } 00:24:43.780 } 00:24:43.780 ] 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "subsystem": "iobuf", 00:24:43.780 "config": [ 00:24:43.780 { 00:24:43.780 "method": "iobuf_set_options", 00:24:43.780 "params": { 00:24:43.780 "small_pool_count": 8192, 00:24:43.780 "large_pool_count": 1024, 00:24:43.780 "small_bufsize": 8192, 00:24:43.780 "large_bufsize": 135168, 00:24:43.780 "enable_numa": false 00:24:43.780 } 00:24:43.780 } 00:24:43.780 ] 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "subsystem": "sock", 00:24:43.780 "config": [ 00:24:43.780 { 00:24:43.780 "method": "sock_set_default_impl", 00:24:43.780 "params": { 00:24:43.780 "impl_name": "posix" 00:24:43.780 } 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "method": "sock_impl_set_options", 00:24:43.780 "params": { 00:24:43.780 "impl_name": "ssl", 00:24:43.780 "recv_buf_size": 4096, 00:24:43.780 "send_buf_size": 4096, 00:24:43.780 "enable_recv_pipe": true, 00:24:43.780 "enable_quickack": false, 00:24:43.780 "enable_placement_id": 0, 00:24:43.780 "enable_zerocopy_send_server": true, 00:24:43.780 "enable_zerocopy_send_client": false, 00:24:43.780 "zerocopy_threshold": 0, 00:24:43.780 "tls_version": 0, 00:24:43.780 "enable_ktls": false 00:24:43.780 } 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "method": "sock_impl_set_options", 00:24:43.780 "params": { 00:24:43.780 "impl_name": "posix", 00:24:43.780 "recv_buf_size": 2097152, 00:24:43.780 "send_buf_size": 2097152, 00:24:43.780 "enable_recv_pipe": true, 00:24:43.780 "enable_quickack": false, 00:24:43.780 "enable_placement_id": 0, 00:24:43.780 "enable_zerocopy_send_server": true, 00:24:43.780 "enable_zerocopy_send_client": false, 00:24:43.780 "zerocopy_threshold": 0, 00:24:43.780 "tls_version": 0, 00:24:43.780 "enable_ktls": false 00:24:43.780 } 00:24:43.780 } 00:24:43.780 ] 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "subsystem": "vmd", 00:24:43.780 "config": [] 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "subsystem": "accel", 00:24:43.780 "config": [ 00:24:43.780 { 00:24:43.780 "method": "accel_set_options", 00:24:43.780 "params": { 00:24:43.780 "small_cache_size": 128, 00:24:43.780 "large_cache_size": 16, 00:24:43.780 "task_count": 2048, 00:24:43.780 "sequence_count": 2048, 00:24:43.780 "buf_count": 2048 00:24:43.780 } 00:24:43.780 } 00:24:43.780 ] 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "subsystem": "bdev", 00:24:43.780 "config": [ 00:24:43.780 { 00:24:43.780 "method": "bdev_set_options", 00:24:43.780 "params": { 00:24:43.780 "bdev_io_pool_size": 65535, 00:24:43.780 "bdev_io_cache_size": 256, 00:24:43.780 "bdev_auto_examine": true, 00:24:43.780 "iobuf_small_cache_size": 128, 00:24:43.780 "iobuf_large_cache_size": 16 00:24:43.780 } 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "method": "bdev_raid_set_options", 00:24:43.780 "params": { 00:24:43.780 "process_window_size_kb": 1024, 00:24:43.780 "process_max_bandwidth_mb_sec": 0 00:24:43.780 } 00:24:43.780 }, 00:24:43.780 { 00:24:43.780 "method": "bdev_iscsi_set_options", 00:24:43.780 "params": { 00:24:43.781 "timeout_sec": 30 00:24:43.781 } 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "method": "bdev_nvme_set_options", 00:24:43.781 "params": { 00:24:43.781 "action_on_timeout": "none", 
00:24:43.781 "timeout_us": 0, 00:24:43.781 "timeout_admin_us": 0, 00:24:43.781 "keep_alive_timeout_ms": 10000, 00:24:43.781 "arbitration_burst": 0, 00:24:43.781 "low_priority_weight": 0, 00:24:43.781 "medium_priority_weight": 0, 00:24:43.781 "high_priority_weight": 0, 00:24:43.781 "nvme_adminq_poll_period_us": 10000, 00:24:43.781 "nvme_ioq_poll_period_us": 0, 00:24:43.781 "io_queue_requests": 512, 00:24:43.781 "delay_cmd_submit": true, 00:24:43.781 "transport_retry_count": 4, 00:24:43.781 "bdev_retry_count": 3, 00:24:43.781 "transport_ack_timeout": 0, 00:24:43.781 "ctrlr_loss_timeout_sec": 0, 00:24:43.781 "reconnect_delay_sec": 0, 00:24:43.781 "fast_io_fail_timeout_sec": 0, 00:24:43.781 "disable_auto_failback": false, 00:24:43.781 "generate_uuids": false, 00:24:43.781 "transport_tos": 0, 00:24:43.781 "nvme_error_stat": false, 00:24:43.781 "rdma_srq_size": 0, 00:24:43.781 "io_path_stat": false, 00:24:43.781 "allow_accel_sequence": false, 00:24:43.781 "rdma_max_cq_size": 0, 00:24:43.781 "rdma_cm_event_timeout_ms": 0, 00:24:43.781 "dhchap_digests": [ 00:24:43.781 "sha256", 00:24:43.781 "sha384", 00:24:43.781 "sha512" 00:24:43.781 ], 00:24:43.781 "dhchap_dhgroups": [ 00:24:43.781 "null", 00:24:43.781 "ffdhe2048", 00:24:43.781 "ffdhe3072", 00:24:43.781 "ffdhe4096", 00:24:43.781 "ffdhe6144", 00:24:43.781 "ffdhe8192" 00:24:43.781 ] 00:24:43.781 } 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "method": "bdev_nvme_attach_controller", 00:24:43.781 "params": { 00:24:43.781 "name": "nvme0", 00:24:43.781 "trtype": "TCP", 00:24:43.781 "adrfam": "IPv4", 00:24:43.781 "traddr": "10.0.0.2", 00:24:43.781 "trsvcid": "4420", 00:24:43.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.781 "prchk_reftag": false, 00:24:43.781 "prchk_guard": false, 00:24:43.781 "ctrlr_loss_timeout_sec": 0, 00:24:43.781 "reconnect_delay_sec": 0, 00:24:43.781 "fast_io_fail_timeout_sec": 0, 00:24:43.781 "psk": "key0", 00:24:43.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.781 "hdgst": false, 00:24:43.781 "ddgst": false, 00:24:43.781 "multipath": "multipath" 00:24:43.781 } 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "method": "bdev_nvme_set_hotplug", 00:24:43.781 "params": { 00:24:43.781 "period_us": 100000, 00:24:43.781 "enable": false 00:24:43.781 } 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "method": "bdev_enable_histogram", 00:24:43.781 "params": { 00:24:43.781 "name": "nvme0n1", 00:24:43.781 "enable": true 00:24:43.781 } 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "method": "bdev_wait_for_examine" 00:24:43.781 } 00:24:43.781 ] 00:24:43.781 }, 00:24:43.781 { 00:24:43.781 "subsystem": "nbd", 00:24:43.781 "config": [] 00:24:43.781 } 00:24:43.781 ] 00:24:43.781 }' 00:24:43.781 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.781 00:51:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.781 [2024-12-07 00:51:59.820968] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:43.781 [2024-12-07 00:51:59.821083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291909 ] 00:24:43.781 [2024-12-07 00:51:59.891293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.037 [2024-12-07 00:51:59.940003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.037 [2024-12-07 00:52:00.123454] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.294 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.294 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:44.294 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.294 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:44.569 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.569 00:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.569 Running I/O for 1 seconds... 00:24:45.758 3208.00 IOPS, 12.53 MiB/s 00:24:45.758 Latency(us) 00:24:45.758 [2024-12-06T23:52:01.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:45.758 Verification LBA range: start 0x0 length 0x2000 00:24:45.758 nvme0n1 : 1.04 3213.43 12.55 0.00 0.00 39248.29 8446.86 37282.70 00:24:45.758 [2024-12-06T23:52:01.909Z] =================================================================================================================== 00:24:45.758 [2024-12-06T23:52:01.909Z] Total : 3213.43 12.55 0.00 0.00 39248.29 8446.86 37282.70 00:24:45.758 { 00:24:45.758 "results": [ 00:24:45.758 { 00:24:45.758 "job": "nvme0n1", 00:24:45.758 "core_mask": "0x2", 00:24:45.758 "workload": "verify", 00:24:45.758 "status": "finished", 00:24:45.758 "verify_range": { 00:24:45.758 "start": 0, 00:24:45.758 "length": 8192 00:24:45.758 }, 00:24:45.758 "queue_depth": 128, 00:24:45.758 "io_size": 4096, 00:24:45.758 "runtime": 1.038143, 00:24:45.758 "iops": 3213.430134384184, 00:24:45.758 "mibps": 12.552461462438218, 00:24:45.758 "io_failed": 0, 00:24:45.758 "io_timeout": 0, 00:24:45.758 "avg_latency_us": 39248.294733102404, 00:24:45.758 "min_latency_us": 8446.862222222222, 00:24:45.758 "max_latency_us": 37282.70222222222 00:24:45.758 } 00:24:45.758 ], 00:24:45.758 "core_count": 1 00:24:45.758 } 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:45.758 nvmf_trace.0 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 291909 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 291909 ']' 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 291909 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291909 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291909' 00:24:45.758 killing process with pid 291909 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 291909 00:24:45.758 Received shutdown signal, test time was about 1.000000 seconds 00:24:45.758 00:24:45.758 Latency(us) 00:24:45.758 [2024-12-06T23:52:01.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.758 [2024-12-06T23:52:01.909Z] =================================================================================================================== 00:24:45.758 [2024-12-06T23:52:01.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.758 00:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 291909 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.016 rmmod nvme_tcp 00:24:46.016 rmmod nvme_fabrics 00:24:46.016 rmmod nvme_keyring 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.016 00:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 291756 ']' 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 291756 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 291756 ']' 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 291756 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291756 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291756' 00:24:46.016 killing process with pid 291756 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 291756 00:24:46.016 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 291756 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.276 00:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QCi7ZY7E11 /tmp/tmp.8GyiEepOOA /tmp/tmp.q1AZ7btZQL 00:24:48.815 00:24:48.815 real 1m22.521s 00:24:48.815 user 2m20.077s 00:24:48.815 sys 0m23.817s 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.815 ************************************ 00:24:48.815 END TEST nvmf_tls 00:24:48.815 
************************************ 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:48.815 ************************************ 00:24:48.815 START TEST nvmf_fips 00:24:48.815 ************************************ 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:48.815 * Looking for test storage... 00:24:48.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.815 --rc genhtml_branch_coverage=1 00:24:48.815 --rc genhtml_function_coverage=1 00:24:48.815 --rc genhtml_legend=1 00:24:48.815 --rc geninfo_all_blocks=1 00:24:48.815 --rc geninfo_unexecuted_blocks=1 00:24:48.815 00:24:48.815 ' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.815 --rc genhtml_branch_coverage=1 00:24:48.815 --rc genhtml_function_coverage=1 00:24:48.815 --rc genhtml_legend=1 00:24:48.815 --rc geninfo_all_blocks=1 00:24:48.815 --rc geninfo_unexecuted_blocks=1 00:24:48.815 00:24:48.815 ' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.815 --rc genhtml_branch_coverage=1 00:24:48.815 --rc genhtml_function_coverage=1 00:24:48.815 --rc genhtml_legend=1 00:24:48.815 --rc geninfo_all_blocks=1 00:24:48.815 --rc geninfo_unexecuted_blocks=1 00:24:48.815 00:24:48.815 ' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.815 --rc genhtml_branch_coverage=1 00:24:48.815 --rc genhtml_function_coverage=1 00:24:48.815 --rc genhtml_legend=1 00:24:48.815 --rc geninfo_all_blocks=1 00:24:48.815 --rc geninfo_unexecuted_blocks=1 00:24:48.815 00:24:48.815 ' 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.815 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:48.816 00:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:48.816 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:48.817 Error setting digest 00:24:48.817 4002D68D907F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:48.817 4002D68D907F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.817 
00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.817 00:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.348 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.349 00:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:51.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:51.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.349 00:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:51.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:51.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.349 00:52:06 
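In the loop above each supported PCI function (the Intel 0x159b pair at 0000:0a:00.0 and 0000:0a:00.1) is resolved to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/, which is how cvl_0_0 and cvl_0_1 are discovered before the TCP test IPs get assigned. A minimal stand-alone sketch of that mapping, with the device list hard-coded here purely for illustration:

    # Sketch: map PCI functions to their net interface names via sysfs,
    # mirroring the pci_net_devs=(...) expansion in the trace.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue      # skip functions with no bound net device
            echo "Found net device under $pci: ${dev##*/}"
        done
    done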
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.349 00:52:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:24:51.349 00:24:51.349 --- 10.0.0.2 ping statistics --- 00:24:51.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.349 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:24:51.349 00:24:51.349 --- 10.0.0.1 ping statistics --- 00:24:51.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.349 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.349 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=294266 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 294266 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 294266 ']' 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 [2024-12-07 00:52:07.145301] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
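With the namespace plumbed and both pings answering, nvmfappstart launches the target inside cvl_0_0_ns_spdk and blocks until its RPC socket responds. The launch command below is copied from the trace; the polling loop is only an illustrative stand-in for the real waitforlisten helper in autotest_common.sh:

    # Sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Hypothetical stand-in for waitforlisten(): poll the RPC socket until it answers.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done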
00:24:51.350 [2024-12-07 00:52:07.145381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.350 [2024-12-07 00:52:07.216240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.350 [2024-12-07 00:52:07.258765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.350 [2024-12-07 00:52:07.258823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.350 [2024-12-07 00:52:07.258851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.350 [2024-12-07 00:52:07.258862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.350 [2024-12-07 00:52:07.258871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.350 [2024-12-07 00:52:07.259471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9vt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9vt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9vt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9vt 00:24:51.350 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.608 [2024-12-07 00:52:07.637647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.609 [2024-12-07 00:52:07.653657] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.609 [2024-12-07 00:52:07.653866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.609 malloc0 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.609 00:52:07 
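The target-side TLS material above is just a PSK interchange string written to a mode-0600 temp file; setup_nvmf_tgt_conf then issues the subsystem and listener RPCs (not expanded by the xtrace here), after which the TLS listener on 10.0.0.2 port 4420 and the malloc0 namespace appear. A sketch of the key-file staging shown in the trace; the key value is the test's throwaway sample, not a real secret:

    # Sketch: stage the NVMe/TCP TLS PSK exactly as the trace does.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"          # restrictive perms, matching the chmod in the trace
    trap 'rm -f "$key_path"' EXIT   # mirrors the rm -f at the end of the test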
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=294295 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 294295 /var/tmp/bdevperf.sock 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 294295 ']' 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.609 00:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:51.867 [2024-12-07 00:52:07.785519] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:51.867 [2024-12-07 00:52:07.785607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294295 ] 00:24:51.867 [2024-12-07 00:52:07.854362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.867 [2024-12-07 00:52:07.900915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.867 00:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.867 00:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:51.867 00:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9vt 00:24:52.125 00:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:52.383 [2024-12-07 00:52:08.517456] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.641 TLSTESTn1 00:24:52.641 00:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.641 Running I/O for 10 seconds... 
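On the initiator side the test registers the PSK file as a named key on bdevperf's private RPC socket, attaches the controller over TLS with that key, and then lets bdevperf.py drive the 10-second verify workload whose per-second throughput samples follow below. The three calls, condensed from the trace (paths and arguments as in the log, bdevperf itself already running with -r /var/tmp/bdevperf.sock):

    # Sketch: initiator-side TLS attach and workload, as issued in the trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $RPC keyring_file_add_key key0 /tmp/spdk-psk.9vt
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests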
00:24:54.946 3145.00 IOPS, 12.29 MiB/s [2024-12-06T23:52:12.029Z] 3190.00 IOPS, 12.46 MiB/s [2024-12-06T23:52:12.961Z] 3218.33 IOPS, 12.57 MiB/s [2024-12-06T23:52:13.912Z] 3243.00 IOPS, 12.67 MiB/s [2024-12-06T23:52:14.970Z] 3268.60 IOPS, 12.77 MiB/s [2024-12-06T23:52:16.077Z] 3286.33 IOPS, 12.84 MiB/s [2024-12-06T23:52:17.007Z] 3302.86 IOPS, 12.90 MiB/s [2024-12-06T23:52:17.939Z] 3301.62 IOPS, 12.90 MiB/s [2024-12-06T23:52:18.871Z] 3251.44 IOPS, 12.70 MiB/s [2024-12-06T23:52:18.871Z] 3265.40 IOPS, 12.76 MiB/s 00:25:02.720 Latency(us) 00:25:02.720 [2024-12-06T23:52:18.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.720 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.720 Verification LBA range: start 0x0 length 0x2000 00:25:02.720 TLSTESTn1 : 10.03 3268.78 12.77 0.00 0.00 39086.46 10340.12 40583.77 00:25:02.720 [2024-12-06T23:52:18.871Z] =================================================================================================================== 00:25:02.720 [2024-12-06T23:52:18.871Z] Total : 3268.78 12.77 0.00 0.00 39086.46 10340.12 40583.77 00:25:02.720 { 00:25:02.720 "results": [ 00:25:02.720 { 00:25:02.720 "job": "TLSTESTn1", 00:25:02.720 "core_mask": "0x4", 00:25:02.720 "workload": "verify", 00:25:02.720 "status": "finished", 00:25:02.720 "verify_range": { 00:25:02.720 "start": 0, 00:25:02.720 "length": 8192 00:25:02.720 }, 00:25:02.720 "queue_depth": 128, 00:25:02.720 "io_size": 4096, 00:25:02.720 "runtime": 10.028513, 00:25:02.720 "iops": 3268.7797283605255, 00:25:02.720 "mibps": 12.768670813908303, 00:25:02.720 "io_failed": 0, 00:25:02.720 "io_timeout": 0, 00:25:02.720 "avg_latency_us": 39086.46365566323, 00:25:02.720 "min_latency_us": 10340.124444444444, 00:25:02.720 "max_latency_us": 40583.77481481482 00:25:02.720 } 00:25:02.720 ], 00:25:02.720 "core_count": 1 00:25:02.720 } 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:02.720 nvmf_trace.0 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 294295 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 294295 ']' 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 294295 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.720 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294295 00:25:02.978 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:02.978 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:02.978 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294295' 00:25:02.978 killing process with pid 294295 00:25:02.978 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 294295 00:25:02.978 Received shutdown signal, test time was about 10.000000 seconds 00:25:02.978 00:25:02.978 Latency(us) 00:25:02.978 [2024-12-06T23:52:19.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.978 [2024-12-06T23:52:19.129Z] =================================================================================================================== 00:25:02.978 [2024-12-06T23:52:19.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.978 00:52:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 294295 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.978 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.978 rmmod nvme_tcp 00:25:02.978 rmmod nvme_fabrics 00:25:03.236 rmmod nvme_keyring 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 294266 ']' 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 294266 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 294266 ']' 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 294266 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294266 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:03.236 00:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294266' 00:25:03.236 killing process with pid 294266 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 294266 00:25:03.236 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 294266 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.494 00:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.399 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.399 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9vt 00:25:05.399 00:25:05.400 real 0m17.060s 00:25:05.400 user 0m22.668s 00:25:05.400 sys 0m5.284s 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:05.400 ************************************ 00:25:05.400 END TEST nvmf_fips 00:25:05.400 ************************************ 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.400 ************************************ 00:25:05.400 START TEST nvmf_control_msg_list 00:25:05.400 ************************************ 00:25:05.400 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:05.659 * Looking for test storage... 
00:25:05.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:05.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.659 --rc genhtml_branch_coverage=1 00:25:05.659 --rc genhtml_function_coverage=1 00:25:05.659 --rc genhtml_legend=1 00:25:05.659 --rc geninfo_all_blocks=1 00:25:05.659 --rc geninfo_unexecuted_blocks=1 00:25:05.659 00:25:05.659 ' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:05.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.659 --rc genhtml_branch_coverage=1 00:25:05.659 --rc genhtml_function_coverage=1 00:25:05.659 --rc genhtml_legend=1 00:25:05.659 --rc geninfo_all_blocks=1 00:25:05.659 --rc geninfo_unexecuted_blocks=1 00:25:05.659 00:25:05.659 ' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:05.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.659 --rc genhtml_branch_coverage=1 00:25:05.659 --rc genhtml_function_coverage=1 00:25:05.659 --rc genhtml_legend=1 00:25:05.659 --rc geninfo_all_blocks=1 00:25:05.659 --rc geninfo_unexecuted_blocks=1 00:25:05.659 00:25:05.659 ' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:05.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.659 --rc genhtml_branch_coverage=1 00:25:05.659 --rc genhtml_function_coverage=1 00:25:05.659 --rc genhtml_legend=1 00:25:05.659 --rc geninfo_all_blocks=1 00:25:05.659 --rc geninfo_unexecuted_blocks=1 00:25:05.659 00:25:05.659 ' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.659 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.660 00:52:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:08.201 00:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.201 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:08.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.202 00:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:08.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:08.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:08.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.202 00:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:25:08.202 00:25:08.202 --- 10.0.0.2 ping statistics --- 00:25:08.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.202 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:25:08.202 00:25:08.202 --- 10.0.0.1 ping statistics --- 00:25:08.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.202 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=297689 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 297689 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 297689 ']' 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.202 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.203 00:52:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 [2024-12-07 00:52:23.948559] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:25:08.203 [2024-12-07 00:52:23.948654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.203 [2024-12-07 00:52:24.022837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.203 [2024-12-07 00:52:24.070128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.203 [2024-12-07 00:52:24.070185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.203 [2024-12-07 00:52:24.070214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.203 [2024-12-07 00:52:24.070224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.203 [2024-12-07 00:52:24.070234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
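Note on the bring-up traced just above: the nvmfappstart/waitforlisten step amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket. The following is a minimal sketch under the same assumptions as this run (namespace cvl_0_0_ns_spdk, default socket /var/tmp/spdk.sock, SPDK checked out under the Jenkins workspace); the polling loop is an illustrative stand-in, not the harness's actual waitforlisten implementation.

#!/usr/bin/env bash
# Hedged sketch of the target bring-up seen in the trace above.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target in the namespace with shm id 0 and all tracepoint groups enabled.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Poll the RPC socket until the app answers (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target already died
    sleep 0.5
done
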
00:25:08.203 [2024-12-07 00:52:24.070821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 [2024-12-07 00:52:24.218440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 Malloc0 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.203 00:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 [2024-12-07 00:52:24.258312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=297715 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=297716 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=297717 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 297715 00:25:08.203 00:52:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.203 [2024-12-07 00:52:24.316798] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.203 [2024-12-07 00:52:24.326839] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.203 [2024-12-07 00:52:24.327060] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:09.580 Initializing NVMe Controllers 00:25:09.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:09.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:09.580 Initialization complete. Launching workers. 
00:25:09.580 ======================================================== 00:25:09.580 Latency(us) 00:25:09.580 Device Information : IOPS MiB/s Average min max 00:25:09.580 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2867.00 11.20 348.30 223.18 41189.87 00:25:09.580 ======================================================== 00:25:09.580 Total : 2867.00 11.20 348.30 223.18 41189.87 00:25:09.580 00:25:09.580 Initializing NVMe Controllers 00:25:09.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:09.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:09.580 Initialization complete. Launching workers. 00:25:09.580 ======================================================== 00:25:09.580 Latency(us) 00:25:09.580 Device Information : IOPS MiB/s Average min max 00:25:09.580 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3652.00 14.27 273.40 158.32 603.15 00:25:09.580 ======================================================== 00:25:09.580 Total : 3652.00 14.27 273.40 158.32 603.15 00:25:09.580 00:25:09.580 Initializing NVMe Controllers 00:25:09.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:09.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:09.580 Initialization complete. Launching workers. 00:25:09.580 ======================================================== 00:25:09.581 Latency(us) 00:25:09.581 Device Information : IOPS MiB/s Average min max 00:25:09.581 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3709.00 14.49 269.23 151.51 496.08 00:25:09.581 ======================================================== 00:25:09.581 Total : 3709.00 14.49 269.23 151.51 496.08 00:25:09.581 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 297716 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 297717 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.581 rmmod nvme_tcp 00:25:09.581 rmmod nvme_fabrics 00:25:09.581 rmmod nvme_keyring 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 297689 ']' 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 297689 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 297689 ']' 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 297689 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297689 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297689' 00:25:09.581 killing process with pid 297689 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 297689 00:25:09.581 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 297689 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.839 00:52:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.383 00:25:12.383 real 0m6.422s 00:25:12.383 user 0m5.601s 00:25:12.383 sys 0m2.829s 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:12.383 ************************************ 00:25:12.383 END TEST nvmf_control_msg_list 00:25:12.383 ************************************ 00:25:12.383 
00:52:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:12.383 ************************************ 00:25:12.383 START TEST nvmf_wait_for_buf 00:25:12.383 ************************************ 00:25:12.383 00:52:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:12.383 * Looking for test storage... 00:25:12.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:12.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.383 --rc genhtml_branch_coverage=1 00:25:12.383 --rc genhtml_function_coverage=1 00:25:12.383 --rc genhtml_legend=1 00:25:12.383 --rc geninfo_all_blocks=1 00:25:12.383 --rc geninfo_unexecuted_blocks=1 00:25:12.383 00:25:12.383 ' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:12.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.383 --rc genhtml_branch_coverage=1 00:25:12.383 --rc genhtml_function_coverage=1 00:25:12.383 --rc genhtml_legend=1 00:25:12.383 --rc geninfo_all_blocks=1 00:25:12.383 --rc geninfo_unexecuted_blocks=1 00:25:12.383 00:25:12.383 ' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:12.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.383 --rc genhtml_branch_coverage=1 00:25:12.383 --rc genhtml_function_coverage=1 00:25:12.383 --rc genhtml_legend=1 00:25:12.383 --rc geninfo_all_blocks=1 00:25:12.383 --rc geninfo_unexecuted_blocks=1 00:25:12.383 00:25:12.383 ' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:12.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.383 --rc genhtml_branch_coverage=1 00:25:12.383 --rc genhtml_function_coverage=1 00:25:12.383 --rc genhtml_legend=1 00:25:12.383 --rc geninfo_all_blocks=1 00:25:12.383 --rc geninfo_unexecuted_blocks=1 00:25:12.383 00:25:12.383 ' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.383 00:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.383 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.384 00:52:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.286 
00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:14.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.286 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:14.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:14.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:14.287 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.287 00:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.287 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:25:14.546 00:25:14.546 --- 10.0.0.2 ping statistics --- 00:25:14.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.546 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:25:14.546 00:25:14.546 --- 10.0.0.1 ping statistics --- 00:25:14.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.546 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=299886 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 299886 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 299886 ']' 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.546 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.546 [2024-12-07 00:52:30.664035] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:25:14.546 [2024-12-07 00:52:30.664121] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.805 [2024-12-07 00:52:30.738844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.805 [2024-12-07 00:52:30.781542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.805 [2024-12-07 00:52:30.781604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.805 [2024-12-07 00:52:30.781633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.805 [2024-12-07 00:52:30.781643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.805 [2024-12-07 00:52:30.781654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.805 [2024-12-07 00:52:30.782244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:14.805 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.805 00:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:15.065 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.065 00:52:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 Malloc0 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 [2024-12-07 00:52:31.017914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.065 [2024-12-07 00:52:31.042136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.065 00:52:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.065 [2024-12-07 00:52:31.131104] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:16.967 Initializing NVMe Controllers 00:25:16.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:16.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:16.967 Initialization complete. Launching workers. 00:25:16.967 ======================================================== 00:25:16.967 Latency(us) 00:25:16.967 Device Information : IOPS MiB/s Average min max 00:25:16.967 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.88 35006.15 8020.26 71840.98 00:25:16.967 ======================================================== 00:25:16.967 Total : 119.00 14.88 35006.15 8020.26 71840.98 00:25:16.967 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.967 rmmod nvme_tcp 00:25:16.967 rmmod nvme_fabrics 00:25:16.967 rmmod nvme_keyring 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 299886 ']' 00:25:16.967 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 299886 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 299886 ']' 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 299886 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 299886 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 299886' 00:25:16.968 killing process with pid 299886 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 299886 00:25:16.968 00:52:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 299886 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.968 00:52:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.511 00:25:19.511 real 0m7.073s 00:25:19.511 user 0m3.279s 00:25:19.511 sys 0m2.133s 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.511 ************************************ 00:25:19.511 END TEST nvmf_wait_for_buf 00:25:19.511 ************************************ 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.511 ************************************ 00:25:19.511 START TEST nvmf_fuzz 00:25:19.511 ************************************ 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:19.511 * Looking for test storage... 00:25:19.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.511 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:19.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.512 --rc genhtml_branch_coverage=1 00:25:19.512 --rc genhtml_function_coverage=1 00:25:19.512 --rc genhtml_legend=1 00:25:19.512 --rc geninfo_all_blocks=1 00:25:19.512 --rc geninfo_unexecuted_blocks=1 00:25:19.512 00:25:19.512 ' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:19.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.512 --rc genhtml_branch_coverage=1 00:25:19.512 --rc genhtml_function_coverage=1 00:25:19.512 --rc genhtml_legend=1 00:25:19.512 --rc geninfo_all_blocks=1 00:25:19.512 --rc geninfo_unexecuted_blocks=1 00:25:19.512 00:25:19.512 ' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:19.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.512 --rc genhtml_branch_coverage=1 00:25:19.512 --rc genhtml_function_coverage=1 00:25:19.512 --rc genhtml_legend=1 00:25:19.512 --rc geninfo_all_blocks=1 00:25:19.512 --rc geninfo_unexecuted_blocks=1 00:25:19.512 00:25:19.512 ' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:19.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.512 --rc genhtml_branch_coverage=1 00:25:19.512 --rc genhtml_function_coverage=1 00:25:19.512 --rc genhtml_legend=1 00:25:19.512 --rc geninfo_all_blocks=1 00:25:19.512 --rc geninfo_unexecuted_blocks=1 00:25:19.512 00:25:19.512 ' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.512 00:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.413 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:25:21.414 00:25:21.414 --- 10.0.0.2 ping statistics --- 00:25:21.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.414 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:21.414 00:25:21.414 --- 10.0.0.1 ping statistics --- 00:25:21.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.414 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=302118 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 302118 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 302118 ']' 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
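At this point nvmf_tcp_init has finished wiring the two ports of the test NIC (cvl_0_0 and cvl_0_1) together and the target application is being launched inside the new namespace. Condensed from the xtrace output above (a recap of what the harness ran, not the script source; the iptables comment option is omitted here), the per-test network setup is:

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Splitting the two ports across namespaces presumably forces initiator traffic out of one physical port and into the other, rather than short-circuiting through host loopback.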
00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.414 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.673 Malloc0 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.673 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:21.933 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.933 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:21.933 00:52:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:54.034 Fuzzing completed. 
Shutting down the fuzz application 00:25:54.034 00:25:54.034 Dumping successful admin opcodes: 00:25:54.034 9, 10, 00:25:54.034 Dumping successful io opcodes: 00:25:54.034 0, 9, 00:25:54.034 NS: 0x2000008eff00 I/O qp, Total commands completed: 495161, total successful commands: 2851, random_seed: 1867368064 00:25:54.034 NS: 0x2000008eff00 admin qp, Total commands completed: 60208, total successful commands: 15, random_seed: 3306428288 00:25:54.034 00:53:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:54.034 Fuzzing completed. Shutting down the fuzz application 00:25:54.034 00:25:54.034 Dumping successful admin opcodes: 00:25:54.034 00:25:54.034 Dumping successful io opcodes: 00:25:54.034 00:25:54.034 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1898879565 00:25:54.034 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1899012660 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.034 rmmod nvme_tcp 00:25:54.034 rmmod nvme_fabrics 00:25:54.034 rmmod nvme_keyring 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 302118 ']' 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 302118 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 302118 ']' 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 302118 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302118 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302118' 00:25:54.034 killing process with pid 302118 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 302118 00:25:54.034 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 302118 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.035 00:53:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.934 00:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.935 00:53:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:55.935 00:25:55.935 real 0m36.905s 00:25:55.935 user 0m50.973s 00:25:55.935 sys 0m14.694s 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:55.935 ************************************ 00:25:55.935 END TEST nvmf_fuzz 00:25:55.935 ************************************ 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:55.935 ************************************ 00:25:55.935 START TEST 
nvmf_multiconnection 00:25:55.935 ************************************ 00:25:55.935 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:56.194 * Looking for test storage... 00:25:56.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.194 --rc genhtml_branch_coverage=1 00:25:56.194 --rc genhtml_function_coverage=1 00:25:56.194 --rc genhtml_legend=1 00:25:56.194 --rc geninfo_all_blocks=1 00:25:56.194 --rc geninfo_unexecuted_blocks=1 00:25:56.194 00:25:56.194 ' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.194 --rc genhtml_branch_coverage=1 00:25:56.194 --rc genhtml_function_coverage=1 00:25:56.194 --rc genhtml_legend=1 00:25:56.194 --rc geninfo_all_blocks=1 00:25:56.194 --rc geninfo_unexecuted_blocks=1 00:25:56.194 00:25:56.194 ' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.194 --rc genhtml_branch_coverage=1 00:25:56.194 --rc genhtml_function_coverage=1 00:25:56.194 --rc genhtml_legend=1 00:25:56.194 --rc geninfo_all_blocks=1 00:25:56.194 --rc geninfo_unexecuted_blocks=1 00:25:56.194 00:25:56.194 ' 00:25:56.194 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:56.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.195 --rc genhtml_branch_coverage=1 00:25:56.195 --rc genhtml_function_coverage=1 00:25:56.195 --rc genhtml_legend=1 00:25:56.195 --rc geninfo_all_blocks=1 00:25:56.195 --rc geninfo_unexecuted_blocks=1 00:25:56.195 00:25:56.195 ' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.195 00:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.731 00:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:58.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:58.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:58.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:58.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:25:58.731 00:25:58.731 --- 10.0.0.2 ping statistics --- 00:25:58.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.731 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:25:58.731 00:25:58.731 --- 10.0.0.1 ping statistics --- 00:25:58.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.731 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=307739 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.731 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 307739 00:25:58.732 00:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 307739 ']' 00:25:58.732 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.732 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.732 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.732 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.732 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 [2024-12-07 00:53:14.668570] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:25:58.732 [2024-12-07 00:53:14.668640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.732 [2024-12-07 00:53:14.740705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.732 [2024-12-07 00:53:14.787822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.732 [2024-12-07 00:53:14.787887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.732 [2024-12-07 00:53:14.787911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.732 [2024-12-07 00:53:14.787922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.732 [2024-12-07 00:53:14.787931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
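The -m 0xF core mask passed to nvmf_tgt above selects CPU cores 0-3, which is why the startup notices that follow report four reactors. A quick, generic way to decode such a mask (a shell sketch, not part of the test scripts):

  mask=0xF
  for core in $(seq 0 31); do
    (( mask & (1 << core) )) && echo "core $core is in the mask"
  done
  # for 0xF this prints cores 0, 1, 2 and 3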
00:25:58.732 [2024-12-07 00:53:14.789550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.732 [2024-12-07 00:53:14.789615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.732 [2024-12-07 00:53:14.789681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.732 [2024-12-07 00:53:14.789684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 [2024-12-07 00:53:14.938448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 Malloc1 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
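From here the multiconnection test repeats one pattern per subsystem: Malloc1/cnode1 above, then Malloc2 through Malloc11 below (NVMF_SUBSYS=11). Condensed into a sketch with the sizes and addresses taken from the log (rpc_cmd is the harness's wrapper around the SPDK RPC client; this is a recap, not the multiconnection.sh source):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # one TCP transport, 8192-byte in-capsule data
  for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                   # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a: allow any host, -s: serial
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done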
00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 [2024-12-07 00:53:15.002752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 Malloc2 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 Malloc3 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 Malloc4 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.990 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.247 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 Malloc5 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 Malloc6 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 Malloc7 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 Malloc8 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 Malloc9 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:59.248 00:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.248 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 Malloc10 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 Malloc11 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.507 00:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:00.071 00:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:00.071 00:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:00.071 00:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.072 00:53:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:00.072 00:53:16 
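The xtrace above shows multiconnection.sh (script lines @21-@25) building one malloc-backed NVMe-oF subsystem per iteration. Reduced to plain RPC calls it is roughly the sketch below; this assumes an SPDK nvmf target is already running and uses the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper, so treat it as an illustration of the sequence rather than the test's exact code.

#!/usr/bin/env bash
# Rough sketch of the per-subsystem setup loop traced above.
# Assumes a running SPDK nvmf target and that scripts/rpc.py ("rpc.py") is on PATH;
# the real test issues the same RPCs through its rpc_cmd helper.
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2    # listener address used throughout this run
NVMF_PORT=4420

for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 64 MB malloc bdev with 512-byte blocks, named MallocN
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem cnodeN with serial SPDKN; -a allows any host to connect
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # Attach the malloc bdev as a namespace and expose a TCP listener
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$TARGET_IP" -s "$NVMF_PORT"
done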
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.978 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:02.916 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:02.916 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.916 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.916 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.916 00:53:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.824 00:53:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:05.391 00:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:05.391 00:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.391 00:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:05.391 00:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.391 00:53:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.297 00:53:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:08.235 00:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:08.235 00:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.235 00:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.235 00:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.235 00:53:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.152 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:10.722 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:10.722 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:26:10.722 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.722 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:10.722 00:53:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.258 00:53:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:13.828 00:53:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:13.828 00:53:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:13.828 00:53:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.828 00:53:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:13.828 00:53:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.730 00:53:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:16.295 00:53:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:16.295 00:53:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:16.295 00:53:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.295 00:53:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:16.295 00:53:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.824 00:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:19.400 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:19.400 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.400 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.400 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.400 00:53:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.297 00:53:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:22.238 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:22.238 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:22.238 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.238 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:22.238 00:53:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.147 00:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:25.087 00:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:25.087 00:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:25.087 00:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.087 00:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:25.087 00:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.991 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.991 00:53:43 
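The same connect-and-wait step repeats for each of the eleven subsystems in this run. Stripped of the xtrace plumbing it amounts to roughly the following; the hostnqn/hostid values are the ones printed above, and waitforserial here is a simplified stand-in for the autotest_common.sh helper (up to 15 polls, 2 seconds apart), not a copy of it.

# Sketch of the connect-and-wait pattern used for cnode1..cnode11 above.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # A namespace from the new connection appears in lsblk with serial SPDKN
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
    done
    return 1
}

for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done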
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:27.930 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:27.930 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:27.930 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.930 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:27.930 00:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:29.837 00:53:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:29.837 [global] 00:26:29.837 thread=1 00:26:29.837 invalidate=1 00:26:29.837 rw=read 00:26:29.837 time_based=1 00:26:29.837 runtime=10 00:26:29.837 ioengine=libaio 00:26:29.837 direct=1 00:26:29.837 bs=262144 00:26:29.837 iodepth=64 00:26:29.837 norandommap=1 00:26:29.837 numjobs=1 00:26:29.837 00:26:29.837 [job0] 00:26:29.837 filename=/dev/nvme0n1 00:26:29.837 [job1] 00:26:29.837 filename=/dev/nvme10n1 00:26:29.837 [job2] 00:26:29.837 filename=/dev/nvme1n1 00:26:29.837 [job3] 00:26:29.837 filename=/dev/nvme2n1 00:26:29.837 [job4] 00:26:29.837 filename=/dev/nvme3n1 00:26:29.837 [job5] 00:26:29.837 filename=/dev/nvme4n1 00:26:29.837 [job6] 00:26:29.837 filename=/dev/nvme5n1 00:26:29.837 [job7] 00:26:29.837 filename=/dev/nvme6n1 00:26:29.837 [job8] 00:26:29.837 filename=/dev/nvme7n1 00:26:29.837 [job9] 00:26:29.837 filename=/dev/nvme8n1 00:26:29.837 [job10] 00:26:29.837 filename=/dev/nvme9n1 00:26:29.837 Could not set queue depth (nvme0n1) 00:26:29.837 Could not set queue depth (nvme10n1) 00:26:29.837 Could not set queue depth (nvme1n1) 00:26:29.837 Could not set queue depth (nvme2n1) 00:26:29.837 Could not set queue depth (nvme3n1) 00:26:29.837 Could not set queue depth (nvme4n1) 00:26:29.837 Could not set queue depth (nvme5n1) 00:26:29.837 Could not set queue depth (nvme6n1) 00:26:29.837 Could not set queue depth (nvme7n1) 00:26:29.837 Could not set queue depth (nvme8n1) 00:26:29.837 Could not set queue depth (nvme9n1) 00:26:30.094 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.094 fio-3.35 00:26:30.094 Starting 11 threads 00:26:42.297 00:26:42.297 job0: (groupid=0, jobs=1): err= 0: pid=311993: Sat Dec 7 00:53:56 2024 00:26:42.297 read: IOPS=391, BW=97.8MiB/s (103MB/s)(1003MiB/10247msec) 00:26:42.297 slat (usec): min=7, max=679361, avg=1947.16, stdev=14712.24 00:26:42.297 clat (usec): min=1139, max=1318.5k, avg=161485.96, stdev=172585.05 00:26:42.297 lat (usec): min=1165, max=1318.6k, avg=163433.13, stdev=174049.10 00:26:42.297 clat percentiles (msec): 00:26:42.297 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 41], 00:26:42.297 | 30.00th=[ 61], 40.00th=[ 94], 50.00th=[ 155], 60.00th=[ 178], 00:26:42.297 | 70.00th=[ 201], 80.00th=[ 224], 90.00th=[ 268], 95.00th=[ 447], 00:26:42.297 | 99.00th=[ 1167], 99.50th=[ 1217], 99.90th=[ 1318], 99.95th=[ 1318], 00:26:42.297 | 99.99th=[ 1318] 00:26:42.297 bw ( KiB/s): min= 8704, max=345600, per=16.23%, avg=101017.60, stdev=79613.01, samples=20 00:26:42.297 iops : min= 34, max= 1350, avg=394.60, stdev=310.99, samples=20 00:26:42.297 lat (msec) : 2=0.62%, 4=2.02%, 10=4.24%, 20=5.99%, 50=10.50% 00:26:42.297 lat (msec) : 100=17.78%, 250=46.36%, 500=8.68%, 750=2.22%, 1000=0.45% 00:26:42.297 lat (msec) : 2000=1.15% 00:26:42.297 cpu : usr=0.05%, sys=0.96%, ctx=1013, majf=0, minf=4098 00:26:42.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.297 issued rwts: total=4010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.297 job1: (groupid=0, jobs=1): err= 0: pid=311996: Sat Dec 7 00:53:56 2024 00:26:42.297 read: IOPS=243, BW=60.9MiB/s (63.9MB/s)(619MiB/10156msec) 00:26:42.297 slat (usec): min=8, max=690717, avg=2509.83, stdev=23831.91 00:26:42.297 clat (usec): min=1567, max=1572.0k, avg=260002.50, stdev=303409.99 00:26:42.297 lat (usec): min=1649, max=1572.1k, avg=262512.33, stdev=306752.27 00:26:42.297 clat percentiles (msec): 00:26:42.297 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:26:42.297 | 30.00th=[ 24], 40.00th=[ 83], 50.00th=[ 144], 60.00th=[ 239], 00:26:42.297 | 70.00th=[ 334], 80.00th=[ 464], 90.00th=[ 718], 95.00th=[ 944], 00:26:42.297 | 99.00th=[ 1200], 99.50th=[ 1250], 
99.90th=[ 1284], 99.95th=[ 1385], 00:26:42.297 | 99.99th=[ 1569] 00:26:42.297 bw ( KiB/s): min= 5632, max=370688, per=9.91%, avg=61696.00, stdev=78469.04, samples=20 00:26:42.297 iops : min= 22, max= 1448, avg=241.00, stdev=306.52, samples=20 00:26:42.297 lat (msec) : 2=0.04%, 4=0.12%, 10=20.98%, 20=4.49%, 50=10.79% 00:26:42.297 lat (msec) : 100=7.44%, 250=17.38%, 500=21.38%, 750=8.85%, 1000=4.45% 00:26:42.297 lat (msec) : 2000=4.08% 00:26:42.297 cpu : usr=0.15%, sys=0.84%, ctx=960, majf=0, minf=4097 00:26:42.297 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:42.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.297 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.297 job2: (groupid=0, jobs=1): err= 0: pid=311997: Sat Dec 7 00:53:56 2024 00:26:42.297 read: IOPS=161, BW=40.4MiB/s (42.4MB/s)(414MiB/10253msec) 00:26:42.297 slat (usec): min=9, max=583344, avg=5393.75, stdev=28563.37 00:26:42.297 clat (msec): min=28, max=1398, avg=390.32, stdev=362.10 00:26:42.297 lat (msec): min=28, max=1398, avg=395.72, stdev=367.40 00:26:42.297 clat percentiles (msec): 00:26:42.297 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 56], 00:26:42.297 | 30.00th=[ 63], 40.00th=[ 163], 50.00th=[ 251], 60.00th=[ 380], 00:26:42.297 | 70.00th=[ 584], 80.00th=[ 709], 90.00th=[ 969], 95.00th=[ 1099], 00:26:42.297 | 99.00th=[ 1250], 99.50th=[ 1301], 99.90th=[ 1334], 99.95th=[ 1401], 00:26:42.297 | 99.99th=[ 1401] 00:26:42.297 bw ( KiB/s): min=12288, max=267776, per=6.55%, avg=40780.80, stdev=55979.50, samples=20 00:26:42.297 iops : min= 48, max= 1046, avg=159.30, stdev=218.67, samples=20 00:26:42.297 lat (msec) : 50=16.11%, 100=19.31%, 250=14.60%, 500=12.37%, 750=18.77% 00:26:42.297 lat (msec) : 1000=11.47%, 2000=7.36% 00:26:42.297 cpu : usr=0.08%, sys=0.45%, ctx=257, majf=0, minf=4097 00:26:42.297 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:42.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.297 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.297 issued rwts: total=1657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job3: (groupid=0, jobs=1): err= 0: pid=311998: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=300, BW=75.1MiB/s (78.7MB/s)(762MiB/10153msec) 00:26:42.298 slat (usec): min=7, max=421573, avg=2609.48, stdev=14654.10 00:26:42.298 clat (usec): min=1475, max=678153, avg=210429.90, stdev=147079.44 00:26:42.298 lat (msec): min=2, max=804, avg=213.04, stdev=149.26 00:26:42.298 clat percentiles (msec): 00:26:42.298 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 51], 20.00th=[ 71], 00:26:42.298 | 30.00th=[ 116], 40.00th=[ 157], 50.00th=[ 178], 60.00th=[ 215], 00:26:42.298 | 70.00th=[ 253], 80.00th=[ 334], 90.00th=[ 439], 95.00th=[ 523], 00:26:42.298 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 651], 00:26:42.298 | 99.99th=[ 676] 00:26:42.298 bw ( KiB/s): min=23552, max=215552, per=12.27%, avg=76393.60, stdev=54420.73, samples=20 00:26:42.298 iops : min= 92, max= 842, avg=298.40, stdev=212.59, samples=20 00:26:42.298 lat (msec) : 2=0.03%, 4=0.07%, 10=0.26%, 20=1.84%, 50=7.58% 00:26:42.298 lat (msec) : 100=15.52%, 250=43.64%, 500=25.66%, 750=5.41% 00:26:42.298 cpu : usr=0.07%, sys=0.75%, 
ctx=584, majf=0, minf=4097 00:26:42.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:42.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.298 issued rwts: total=3048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job4: (groupid=0, jobs=1): err= 0: pid=311999: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=98, BW=24.7MiB/s (25.8MB/s)(253MiB/10253msec) 00:26:42.298 slat (usec): min=12, max=365413, avg=9482.96, stdev=36322.15 00:26:42.298 clat (msec): min=27, max=1449, avg=638.95, stdev=351.16 00:26:42.298 lat (msec): min=27, max=1449, avg=648.44, stdev=356.35 00:26:42.298 clat percentiles (msec): 00:26:42.298 | 1.00th=[ 29], 5.00th=[ 174], 10.00th=[ 209], 20.00th=[ 284], 00:26:42.298 | 30.00th=[ 355], 40.00th=[ 468], 50.00th=[ 634], 60.00th=[ 709], 00:26:42.298 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1116], 95.00th=[ 1183], 00:26:42.298 | 99.00th=[ 1284], 99.50th=[ 1284], 99.90th=[ 1318], 99.95th=[ 1452], 00:26:42.298 | 99.99th=[ 1452] 00:26:42.298 bw ( KiB/s): min=10752, max=67072, per=3.90%, avg=24268.80, stdev=15166.01, samples=20 00:26:42.298 iops : min= 42, max= 262, avg=94.80, stdev=59.24, samples=20 00:26:42.298 lat (msec) : 50=2.97%, 250=11.47%, 500=28.49%, 750=20.08%, 1000=15.13% 00:26:42.298 lat (msec) : 2000=21.86% 00:26:42.298 cpu : usr=0.06%, sys=0.35%, ctx=122, majf=0, minf=4097 00:26:42.298 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:26:42.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.298 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.298 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job5: (groupid=0, jobs=1): err= 0: pid=312001: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=271, BW=67.9MiB/s (71.1MB/s)(689MiB/10158msec) 00:26:42.298 slat (usec): min=8, max=167473, avg=3404.13, stdev=13207.10 00:26:42.298 clat (msec): min=30, max=595, avg=232.21, stdev=140.29 00:26:42.298 lat (msec): min=30, max=614, avg=235.62, stdev=142.13 00:26:42.298 clat percentiles (msec): 00:26:42.298 | 1.00th=[ 52], 5.00th=[ 73], 10.00th=[ 87], 20.00th=[ 104], 00:26:42.298 | 30.00th=[ 116], 40.00th=[ 142], 50.00th=[ 186], 60.00th=[ 253], 00:26:42.298 | 70.00th=[ 321], 80.00th=[ 376], 90.00th=[ 443], 95.00th=[ 498], 00:26:42.298 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 600], 00:26:42.298 | 99.99th=[ 600] 00:26:42.298 bw ( KiB/s): min=29696, max=155648, per=11.07%, avg=68940.80, stdev=39621.86, samples=20 00:26:42.298 iops : min= 116, max= 608, avg=269.30, stdev=154.77, samples=20 00:26:42.298 lat (msec) : 50=0.94%, 100=16.65%, 250=41.64%, 500=36.20%, 750=4.57% 00:26:42.298 cpu : usr=0.05%, sys=0.90%, ctx=377, majf=0, minf=4098 00:26:42.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:42.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.298 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job6: (groupid=0, jobs=1): err= 0: pid=312002: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=164, BW=41.2MiB/s 
(43.2MB/s)(422MiB/10250msec) 00:26:42.298 slat (usec): min=7, max=387241, avg=3673.64, stdev=23130.70 00:26:42.298 clat (msec): min=22, max=1439, avg=384.62, stdev=380.14 00:26:42.298 lat (msec): min=22, max=1439, avg=388.30, stdev=383.41 00:26:42.298 clat percentiles (msec): 00:26:42.298 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 59], 00:26:42.298 | 30.00th=[ 77], 40.00th=[ 113], 50.00th=[ 192], 60.00th=[ 359], 00:26:42.298 | 70.00th=[ 550], 80.00th=[ 852], 90.00th=[ 1053], 95.00th=[ 1099], 00:26:42.298 | 99.00th=[ 1267], 99.50th=[ 1301], 99.90th=[ 1401], 99.95th=[ 1435], 00:26:42.298 | 99.99th=[ 1435] 00:26:42.298 bw ( KiB/s): min=11264, max=252416, per=6.68%, avg=41574.40, stdev=54995.67, samples=20 00:26:42.298 iops : min= 44, max= 986, avg=162.40, stdev=214.83, samples=20 00:26:42.298 lat (msec) : 50=11.73%, 100=26.13%, 250=15.88%, 500=12.26%, 750=11.91% 00:26:42.298 lat (msec) : 1000=9.95%, 2000=12.14% 00:26:42.298 cpu : usr=0.03%, sys=0.45%, ctx=193, majf=0, minf=4097 00:26:42.298 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:42.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.298 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.298 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job7: (groupid=0, jobs=1): err= 0: pid=312003: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=244, BW=61.0MiB/s (64.0MB/s)(625MiB/10243msec) 00:26:42.298 slat (usec): min=8, max=547162, avg=2456.67, stdev=20955.77 00:26:42.298 clat (usec): min=1468, max=1432.6k, avg=259426.16, stdev=293301.48 00:26:42.298 lat (usec): min=1502, max=1432.6k, avg=261882.82, stdev=297223.11 00:26:42.298 clat percentiles (msec): 00:26:42.298 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 23], 00:26:42.298 | 30.00th=[ 97], 40.00th=[ 136], 50.00th=[ 148], 60.00th=[ 176], 00:26:42.298 | 70.00th=[ 236], 80.00th=[ 468], 90.00th=[ 785], 95.00th=[ 953], 00:26:42.298 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1368], 00:26:42.298 | 99.99th=[ 1435] 00:26:42.298 bw ( KiB/s): min= 5632, max=205824, per=10.03%, avg=62412.80, stdev=54959.36, samples=20 00:26:42.298 iops : min= 22, max= 804, avg=243.80, stdev=214.68, samples=20 00:26:42.298 lat (msec) : 2=0.16%, 4=2.48%, 10=9.84%, 20=5.44%, 50=9.00% 00:26:42.298 lat (msec) : 100=3.56%, 250=41.02%, 500=9.60%, 750=8.00%, 1000=7.24% 00:26:42.298 lat (msec) : 2000=3.68% 00:26:42.298 cpu : usr=0.08%, sys=0.70%, ctx=783, majf=0, minf=4097 00:26:42.298 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:42.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.298 issued rwts: total=2501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.298 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.298 job8: (groupid=0, jobs=1): err= 0: pid=312004: Sat Dec 7 00:53:56 2024 00:26:42.298 read: IOPS=234, BW=58.6MiB/s (61.5MB/s)(595MiB/10155msec) 00:26:42.298 slat (usec): min=8, max=386361, avg=3518.13, stdev=16977.95 00:26:42.298 clat (msec): min=38, max=948, avg=269.27, stdev=181.55 00:26:42.298 lat (msec): min=38, max=979, avg=272.79, stdev=183.25 00:26:42.298 clat percentiles (msec): 00:26:42.299 | 1.00th=[ 69], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 91], 00:26:42.299 | 30.00th=[ 122], 40.00th=[ 205], 50.00th=[ 239], 
60.00th=[ 288], 00:26:42.299 | 70.00th=[ 342], 80.00th=[ 409], 90.00th=[ 514], 95.00th=[ 592], 00:26:42.299 | 99.00th=[ 902], 99.50th=[ 919], 99.90th=[ 953], 99.95th=[ 953], 00:26:42.299 | 99.99th=[ 953] 00:26:42.299 bw ( KiB/s): min=14848, max=179712, per=9.53%, avg=59315.20, stdev=44072.18, samples=20 00:26:42.299 iops : min= 58, max= 702, avg=231.70, stdev=172.16, samples=20 00:26:42.299 lat (msec) : 50=0.34%, 100=24.32%, 250=28.14%, 500=36.50%, 750=8.27% 00:26:42.299 lat (msec) : 1000=2.44% 00:26:42.299 cpu : usr=0.10%, sys=0.63%, ctx=350, majf=0, minf=3721 00:26:42.299 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:42.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.299 issued rwts: total=2381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.299 job9: (groupid=0, jobs=1): err= 0: pid=312005: Sat Dec 7 00:53:56 2024 00:26:42.299 read: IOPS=163, BW=41.0MiB/s (42.9MB/s)(420MiB/10242msec) 00:26:42.299 slat (usec): min=12, max=688137, avg=4448.53, stdev=29555.61 00:26:42.299 clat (msec): min=23, max=1429, avg=385.71, stdev=388.37 00:26:42.299 lat (msec): min=23, max=1710, avg=390.16, stdev=393.17 00:26:42.299 clat percentiles (msec): 00:26:42.299 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 56], 20.00th=[ 79], 00:26:42.299 | 30.00th=[ 92], 40.00th=[ 109], 50.00th=[ 129], 60.00th=[ 368], 00:26:42.299 | 70.00th=[ 592], 80.00th=[ 776], 90.00th=[ 1003], 95.00th=[ 1133], 00:26:42.299 | 99.00th=[ 1435], 99.50th=[ 1435], 99.90th=[ 1435], 99.95th=[ 1435], 00:26:42.299 | 99.99th=[ 1435] 00:26:42.299 bw ( KiB/s): min= 7168, max=186880, per=6.99%, avg=43496.68, stdev=50901.37, samples=19 00:26:42.299 iops : min= 28, max= 730, avg=169.89, stdev=198.84, samples=19 00:26:42.299 lat (msec) : 50=8.05%, 100=25.92%, 250=21.57%, 500=10.01%, 750=13.59% 00:26:42.299 lat (msec) : 1000=10.67%, 2000=10.19% 00:26:42.299 cpu : usr=0.05%, sys=0.48%, ctx=220, majf=0, minf=4097 00:26:42.299 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:42.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.299 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.299 issued rwts: total=1678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.299 job10: (groupid=0, jobs=1): err= 0: pid=312006: Sat Dec 7 00:53:56 2024 00:26:42.299 read: IOPS=168, BW=42.1MiB/s (44.2MB/s)(432MiB/10252msec) 00:26:42.299 slat (usec): min=11, max=373014, avg=5641.91, stdev=26299.35 00:26:42.299 clat (msec): min=10, max=1454, avg=373.89, stdev=376.32 00:26:42.299 lat (msec): min=11, max=1510, avg=379.53, stdev=381.82 00:26:42.299 clat percentiles (msec): 00:26:42.299 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 47], 00:26:42.299 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 284], 60.00th=[ 409], 00:26:42.299 | 70.00th=[ 518], 80.00th=[ 743], 90.00th=[ 969], 95.00th=[ 1083], 00:26:42.299 | 99.00th=[ 1334], 99.50th=[ 1418], 99.90th=[ 1452], 99.95th=[ 1452], 00:26:42.299 | 99.99th=[ 1452] 00:26:42.299 bw ( KiB/s): min= 8192, max=334848, per=6.84%, avg=42572.80, stdev=72020.32, samples=20 00:26:42.299 iops : min= 32, max= 1308, avg=166.30, stdev=281.33, samples=20 00:26:42.299 lat (msec) : 20=0.98%, 50=27.04%, 100=17.31%, 250=3.53%, 500=20.27% 00:26:42.299 lat (msec) : 750=11.52%, 
1000=11.46%, 2000=7.87% 00:26:42.299 cpu : usr=0.04%, sys=0.54%, ctx=196, majf=0, minf=4098 00:26:42.299 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.4% 00:26:42.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.299 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.299 issued rwts: total=1727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.299 00:26:42.299 Run status group 0 (all jobs): 00:26:42.299 READ: bw=608MiB/s (637MB/s), 24.7MiB/s-97.8MiB/s (25.8MB/s-103MB/s), io=6233MiB (6536MB), run=10153-10253msec 00:26:42.299 00:26:42.299 Disk stats (read/write): 00:26:42.299 nvme0n1: ios=7967/0, merge=0/0, ticks=1247903/0, in_queue=1247903, util=97.42% 00:26:42.299 nvme10n1: ios=4805/0, merge=0/0, ticks=1234747/0, in_queue=1234747, util=97.55% 00:26:42.299 nvme1n1: ios=3210/0, merge=0/0, ticks=1213559/0, in_queue=1213559, util=97.87% 00:26:42.299 nvme2n1: ios=5969/0, merge=0/0, ticks=1227623/0, in_queue=1227623, util=97.93% 00:26:42.299 nvme3n1: ios=1925/0, merge=0/0, ticks=1238829/0, in_queue=1238829, util=98.08% 00:26:42.299 nvme4n1: ios=5339/0, merge=0/0, ticks=1227932/0, in_queue=1227932, util=98.35% 00:26:42.299 nvme5n1: ios=3300/0, merge=0/0, ticks=1259664/0, in_queue=1259664, util=98.53% 00:26:42.299 nvme6n1: ios=4942/0, merge=0/0, ticks=1250407/0, in_queue=1250407, util=98.64% 00:26:42.299 nvme7n1: ios=4619/0, merge=0/0, ticks=1233074/0, in_queue=1233074, util=98.97% 00:26:42.299 nvme8n1: ios=3293/0, merge=0/0, ticks=1249104/0, in_queue=1249104, util=99.13% 00:26:42.299 nvme9n1: ios=3357/0, merge=0/0, ticks=1238069/0, in_queue=1238069, util=99.27% 00:26:42.299 00:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:42.299 [global] 00:26:42.299 thread=1 00:26:42.299 invalidate=1 00:26:42.299 rw=randwrite 00:26:42.299 time_based=1 00:26:42.299 runtime=10 00:26:42.299 ioengine=libaio 00:26:42.299 direct=1 00:26:42.299 bs=262144 00:26:42.299 iodepth=64 00:26:42.299 norandommap=1 00:26:42.299 numjobs=1 00:26:42.299 00:26:42.299 [job0] 00:26:42.299 filename=/dev/nvme0n1 00:26:42.299 [job1] 00:26:42.299 filename=/dev/nvme10n1 00:26:42.299 [job2] 00:26:42.299 filename=/dev/nvme1n1 00:26:42.299 [job3] 00:26:42.299 filename=/dev/nvme2n1 00:26:42.299 [job4] 00:26:42.299 filename=/dev/nvme3n1 00:26:42.299 [job5] 00:26:42.299 filename=/dev/nvme4n1 00:26:42.299 [job6] 00:26:42.299 filename=/dev/nvme5n1 00:26:42.299 [job7] 00:26:42.299 filename=/dev/nvme6n1 00:26:42.299 [job8] 00:26:42.299 filename=/dev/nvme7n1 00:26:42.299 [job9] 00:26:42.299 filename=/dev/nvme8n1 00:26:42.299 [job10] 00:26:42.299 filename=/dev/nvme9n1 00:26:42.299 Could not set queue depth (nvme0n1) 00:26:42.299 Could not set queue depth (nvme10n1) 00:26:42.299 Could not set queue depth (nvme1n1) 00:26:42.299 Could not set queue depth (nvme2n1) 00:26:42.299 Could not set queue depth (nvme3n1) 00:26:42.299 Could not set queue depth (nvme4n1) 00:26:42.299 Could not set queue depth (nvme5n1) 00:26:42.299 Could not set queue depth (nvme6n1) 00:26:42.299 Could not set queue depth (nvme7n1) 00:26:42.299 Could not set queue depth (nvme8n1) 00:26:42.299 Could not set queue depth (nvme9n1) 00:26:42.299 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.299 
job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.299 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.300 fio-3.35 00:26:42.300 Starting 11 threads 00:26:52.284 00:26:52.284 job0: (groupid=0, jobs=1): err= 0: pid=312736: Sat Dec 7 00:54:07 2024 00:26:52.284 write: IOPS=265, BW=66.4MiB/s (69.6MB/s)(674MiB/10151msec); 0 zone resets 00:26:52.284 slat (usec): min=13, max=141005, avg=2823.26, stdev=10365.70 00:26:52.284 clat (usec): min=1200, max=1124.9k, avg=237963.21, stdev=265159.95 00:26:52.284 lat (usec): min=1899, max=1125.0k, avg=240786.47, stdev=268870.29 00:26:52.284 clat percentiles (msec): 00:26:52.284 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 27], 20.00th=[ 53], 00:26:52.284 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 89], 60.00th=[ 161], 00:26:52.284 | 70.00th=[ 305], 80.00th=[ 426], 90.00th=[ 693], 95.00th=[ 810], 00:26:52.284 | 99.00th=[ 1020], 99.50th=[ 1062], 99.90th=[ 1116], 99.95th=[ 1116], 00:26:52.284 | 99.99th=[ 1133] 00:26:52.284 bw ( KiB/s): min=14336, max=301056, per=7.28%, avg=67379.80, stdev=81106.32, samples=20 00:26:52.284 iops : min= 56, max= 1176, avg=263.20, stdev=316.82, samples=20 00:26:52.284 lat (msec) : 2=0.07%, 4=0.30%, 10=2.93%, 20=4.64%, 50=11.13% 00:26:52.284 lat (msec) : 100=32.31%, 250=15.10%, 500=15.76%, 750=11.28%, 1000=5.34% 00:26:52.284 lat (msec) : 2000=1.15% 00:26:52.284 cpu : usr=0.82%, sys=0.91%, ctx=1548, majf=0, minf=1 00:26:52.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:52.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.284 issued rwts: total=0,2696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.284 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.285 job1: (groupid=0, jobs=1): err= 0: pid=312749: Sat Dec 7 00:54:07 2024 00:26:52.285 write: IOPS=196, BW=49.2MiB/s (51.5MB/s)(500MiB/10177msec); 0 zone resets 00:26:52.285 slat (usec): min=24, max=333330, avg=4134.19, stdev=16326.02 00:26:52.285 clat (msec): min=17, max=1255, avg=321.02, stdev=263.81 00:26:52.285 lat (msec): min=17, max=1255, avg=325.15, stdev=266.93 00:26:52.285 clat percentiles (msec): 00:26:52.285 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 66], 20.00th=[ 107], 00:26:52.285 | 30.00th=[ 157], 40.00th=[ 213], 50.00th=[ 262], 60.00th=[ 305], 00:26:52.285 | 
70.00th=[ 363], 80.00th=[ 426], 90.00th=[ 776], 95.00th=[ 894], 00:26:52.285 | 99.00th=[ 1217], 99.50th=[ 1234], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:52.285 | 99.99th=[ 1250] 00:26:52.285 bw ( KiB/s): min= 8192, max=160768, per=5.36%, avg=49581.85, stdev=39145.60, samples=20 00:26:52.285 iops : min= 32, max= 628, avg=193.65, stdev=152.91, samples=20 00:26:52.285 lat (msec) : 20=0.05%, 50=5.70%, 100=11.49%, 250=31.18%, 500=35.98% 00:26:52.285 lat (msec) : 750=4.55%, 1000=7.95%, 2000=3.10% 00:26:52.285 cpu : usr=0.65%, sys=0.75%, ctx=889, majf=0, minf=1 00:26:52.285 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:52.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.285 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.285 issued rwts: total=0,2001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.285 job2: (groupid=0, jobs=1): err= 0: pid=312750: Sat Dec 7 00:54:07 2024 00:26:52.285 write: IOPS=375, BW=94.0MiB/s (98.5MB/s)(956MiB/10170msec); 0 zone resets 00:26:52.285 slat (usec): min=15, max=217279, avg=1808.79, stdev=7607.70 00:26:52.285 clat (usec): min=1167, max=1117.0k, avg=168404.19, stdev=225289.38 00:26:52.285 lat (usec): min=1451, max=1117.1k, avg=170212.98, stdev=227215.29 00:26:52.285 clat percentiles (msec): 00:26:52.285 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 15], 00:26:52.285 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 51], 60.00th=[ 97], 00:26:52.285 | 70.00th=[ 218], 80.00th=[ 313], 90.00th=[ 418], 95.00th=[ 676], 00:26:52.285 | 99.00th=[ 1062], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1099], 00:26:52.285 | 99.99th=[ 1116] 00:26:52.285 bw ( KiB/s): min=12288, max=321536, per=10.40%, avg=96224.30, stdev=96627.38, samples=20 00:26:52.285 iops : min= 48, max= 1256, avg=375.85, stdev=377.46, samples=20 00:26:52.285 lat (msec) : 2=0.42%, 4=1.83%, 10=9.55%, 20=10.88%, 50=26.87% 00:26:52.285 lat (msec) : 100=10.78%, 250=12.19%, 500=20.64%, 750=2.59%, 1000=2.62% 00:26:52.285 lat (msec) : 2000=1.62% 00:26:52.285 cpu : usr=1.16%, sys=1.08%, ctx=2208, majf=0, minf=1 00:26:52.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:52.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.285 issued rwts: total=0,3822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.285 job3: (groupid=0, jobs=1): err= 0: pid=312751: Sat Dec 7 00:54:07 2024 00:26:52.285 write: IOPS=347, BW=86.8MiB/s (91.1MB/s)(881MiB/10139msec); 0 zone resets 00:26:52.285 slat (usec): min=16, max=222369, avg=2364.74, stdev=10430.44 00:26:52.285 clat (usec): min=696, max=1263.6k, avg=181791.91, stdev=227746.67 00:26:52.285 lat (usec): min=752, max=1263.7k, avg=184156.65, stdev=230966.48 00:26:52.285 clat percentiles (usec): 00:26:52.285 | 1.00th=[ 1778], 5.00th=[ 5997], 10.00th=[ 16712], 00:26:52.285 | 20.00th=[ 43779], 30.00th=[ 81265], 40.00th=[ 87557], 00:26:52.285 | 50.00th=[ 96994], 60.00th=[ 113771], 70.00th=[ 156238], 00:26:52.285 | 80.00th=[ 267387], 90.00th=[ 400557], 95.00th=[ 834667], 00:26:52.285 | 99.00th=[1115685], 99.50th=[1182794], 99.90th=[1249903], 00:26:52.285 | 99.95th=[1266680], 99.99th=[1266680] 00:26:52.285 bw ( KiB/s): min=10240, max=265216, per=9.57%, avg=88570.00, stdev=79210.71, samples=20 00:26:52.285 iops : min= 
40, max= 1036, avg=345.95, stdev=309.38, samples=20 00:26:52.285 lat (usec) : 750=0.06%, 1000=0.20% 00:26:52.285 lat (msec) : 2=1.05%, 4=1.93%, 10=3.09%, 20=5.71%, 50=9.31% 00:26:52.285 lat (msec) : 100=31.37%, 250=24.59%, 500=15.28%, 750=1.79%, 1000=3.95% 00:26:52.285 lat (msec) : 2000=1.68% 00:26:52.285 cpu : usr=1.04%, sys=1.16%, ctx=1880, majf=0, minf=2 00:26:52.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:52.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.285 issued rwts: total=0,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.285 job4: (groupid=0, jobs=1): err= 0: pid=312752: Sat Dec 7 00:54:07 2024 00:26:52.285 write: IOPS=225, BW=56.4MiB/s (59.1MB/s)(571MiB/10131msec); 0 zone resets 00:26:52.285 slat (usec): min=18, max=123387, avg=3308.28, stdev=9742.91 00:26:52.285 clat (usec): min=802, max=1067.1k, avg=280313.37, stdev=246550.19 00:26:52.285 lat (usec): min=844, max=1081.2k, avg=283621.65, stdev=249037.62 00:26:52.285 clat percentiles (usec): 00:26:52.285 | 1.00th=[ 1156], 5.00th=[ 2474], 10.00th=[ 10552], 00:26:52.285 | 20.00th=[ 40633], 30.00th=[ 127402], 40.00th=[ 179307], 00:26:52.285 | 50.00th=[ 229639], 60.00th=[ 283116], 70.00th=[ 316670], 00:26:52.285 | 80.00th=[ 459277], 90.00th=[ 692061], 95.00th=[ 801113], 00:26:52.285 | 99.00th=[ 994051], 99.50th=[1027605], 99.90th=[1061159], 00:26:52.285 | 99.95th=[1061159], 99.99th=[1069548] 00:26:52.285 bw ( KiB/s): min=16896, max=146944, per=6.14%, avg=56876.35, stdev=37239.28, samples=20 00:26:52.285 iops : min= 66, max= 574, avg=222.15, stdev=145.46, samples=20 00:26:52.285 lat (usec) : 1000=0.66% 00:26:52.285 lat (msec) : 2=2.54%, 4=3.02%, 10=2.89%, 20=5.47%, 50=6.00% 00:26:52.285 lat (msec) : 100=4.46%, 250=28.88%, 500=27.92%, 750=11.33%, 1000=5.91% 00:26:52.285 lat (msec) : 2000=0.92% 00:26:52.285 cpu : usr=0.70%, sys=0.84%, ctx=1171, majf=0, minf=1 00:26:52.285 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:52.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.285 issued rwts: total=0,2285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.285 job5: (groupid=0, jobs=1): err= 0: pid=312753: Sat Dec 7 00:54:07 2024 00:26:52.285 write: IOPS=317, BW=79.3MiB/s (83.1MB/s)(807MiB/10180msec); 0 zone resets 00:26:52.285 slat (usec): min=20, max=190893, avg=2542.59, stdev=9525.01 00:26:52.285 clat (usec): min=1820, max=1178.1k, avg=198558.53, stdev=241566.59 00:26:52.285 lat (msec): min=2, max=1194, avg=201.10, stdev=244.99 00:26:52.285 clat percentiles (msec): 00:26:52.285 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 40], 00:26:52.285 | 30.00th=[ 42], 40.00th=[ 71], 50.00th=[ 89], 60.00th=[ 121], 00:26:52.285 | 70.00th=[ 247], 80.00th=[ 338], 90.00th=[ 625], 95.00th=[ 760], 00:26:52.285 | 99.00th=[ 1028], 99.50th=[ 1083], 99.90th=[ 1150], 99.95th=[ 1167], 00:26:52.285 | 99.99th=[ 1183] 00:26:52.285 bw ( KiB/s): min=14336, max=224768, per=8.75%, avg=81018.00, stdev=77148.37, samples=20 00:26:52.285 iops : min= 56, max= 878, avg=316.45, stdev=301.37, samples=20 00:26:52.285 lat (msec) : 2=0.03%, 4=0.96%, 10=6.26%, 20=4.93%, 50=24.07% 00:26:52.285 lat (msec) : 100=17.32%, 250=16.67%, 
500=17.84%, 750=6.82%, 1000=3.93% 00:26:52.285 lat (msec) : 2000=1.18% 00:26:52.285 cpu : usr=1.13%, sys=1.49%, ctx=1706, majf=0, minf=1 00:26:52.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:52.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.286 issued rwts: total=0,3228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.286 job6: (groupid=0, jobs=1): err= 0: pid=312754: Sat Dec 7 00:54:07 2024 00:26:52.286 write: IOPS=638, BW=160MiB/s (167MB/s)(1618MiB/10126msec); 0 zone resets 00:26:52.286 slat (usec): min=14, max=30092, avg=537.99, stdev=2109.96 00:26:52.286 clat (usec): min=779, max=1043.0k, avg=99583.45, stdev=125272.10 00:26:52.286 lat (usec): min=822, max=1043.1k, avg=100121.44, stdev=125432.11 00:26:52.286 clat percentiles (usec): 00:26:52.286 | 1.00th=[ 1549], 5.00th=[ 5276], 10.00th=[ 13698], 00:26:52.286 | 20.00th=[ 25560], 30.00th=[ 38536], 40.00th=[ 45351], 00:26:52.286 | 50.00th=[ 50070], 60.00th=[ 78119], 70.00th=[ 112722], 00:26:52.286 | 80.00th=[ 137364], 90.00th=[ 221250], 95.00th=[ 325059], 00:26:52.286 | 99.00th=[ 683672], 99.50th=[ 834667], 99.90th=[1019216], 00:26:52.286 | 99.95th=[1035994], 99.99th=[1044382] 00:26:52.286 bw ( KiB/s): min=38912, max=418304, per=17.72%, avg=164019.20, stdev=91015.00, samples=20 00:26:52.286 iops : min= 152, max= 1634, avg=640.70, stdev=355.53, samples=20 00:26:52.286 lat (usec) : 1000=0.49% 00:26:52.286 lat (msec) : 2=0.82%, 4=2.10%, 10=4.47%, 20=5.27%, 50=36.71% 00:26:52.286 lat (msec) : 100=14.62%, 250=27.51%, 500=5.83%, 750=1.39%, 1000=0.65% 00:26:52.286 lat (msec) : 2000=0.14% 00:26:52.286 cpu : usr=1.90%, sys=2.40%, ctx=4773, majf=0, minf=1 00:26:52.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:52.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.286 issued rwts: total=0,6470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.286 job7: (groupid=0, jobs=1): err= 0: pid=312755: Sat Dec 7 00:54:07 2024 00:26:52.286 write: IOPS=381, BW=95.5MiB/s (100MB/s)(972MiB/10181msec); 0 zone resets 00:26:52.286 slat (usec): min=19, max=310284, avg=1506.66, stdev=11379.22 00:26:52.286 clat (usec): min=796, max=1376.3k, avg=165877.33, stdev=236474.25 00:26:52.286 lat (usec): min=818, max=1376.4k, avg=167383.99, stdev=237715.21 00:26:52.286 clat percentiles (usec): 00:26:52.286 | 1.00th=[ 1582], 5.00th=[ 5080], 10.00th=[ 8225], 00:26:52.286 | 20.00th=[ 15533], 30.00th=[ 27657], 40.00th=[ 55837], 00:26:52.286 | 50.00th=[ 96994], 60.00th=[ 121111], 70.00th=[ 143655], 00:26:52.286 | 80.00th=[ 244319], 90.00th=[ 459277], 95.00th=[ 692061], 00:26:52.286 | 99.00th=[1115685], 99.50th=[1249903], 99.90th=[1333789], 00:26:52.286 | 99.95th=[1333789], 99.99th=[1384121] 00:26:52.286 bw ( KiB/s): min=36864, max=174080, per=10.58%, avg=97914.05, stdev=42012.05, samples=20 00:26:52.286 iops : min= 144, max= 680, avg=382.45, stdev=164.14, samples=20 00:26:52.286 lat (usec) : 1000=0.36% 00:26:52.286 lat (msec) : 2=1.08%, 4=2.65%, 10=8.46%, 20=12.21%, 50=12.42% 00:26:52.286 lat (msec) : 100=13.83%, 250=29.49%, 500=10.47%, 750=4.50%, 1000=1.72% 00:26:52.286 lat (msec) : 2000=2.80% 00:26:52.286 cpu : usr=1.28%, sys=1.80%, 
ctx=2624, majf=0, minf=1 00:26:52.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:52.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.286 issued rwts: total=0,3889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.286 job8: (groupid=0, jobs=1): err= 0: pid=312756: Sat Dec 7 00:54:07 2024 00:26:52.286 write: IOPS=350, BW=87.7MiB/s (92.0MB/s)(889MiB/10126msec); 0 zone resets 00:26:52.286 slat (usec): min=17, max=301818, avg=1539.90, stdev=8058.13 00:26:52.286 clat (usec): min=1059, max=1162.6k, avg=180646.63, stdev=206513.43 00:26:52.286 lat (usec): min=1118, max=1162.7k, avg=182186.52, stdev=208068.53 00:26:52.286 clat percentiles (msec): 00:26:52.286 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 54], 00:26:52.286 | 30.00th=[ 86], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 133], 00:26:52.286 | 70.00th=[ 178], 80.00th=[ 243], 90.00th=[ 326], 95.00th=[ 701], 00:26:52.286 | 99.00th=[ 1003], 99.50th=[ 1062], 99.90th=[ 1116], 99.95th=[ 1150], 00:26:52.286 | 99.99th=[ 1167] 00:26:52.286 bw ( KiB/s): min=16384, max=208384, per=9.65%, avg=89356.85, stdev=47257.09, samples=20 00:26:52.286 iops : min= 64, max= 814, avg=349.05, stdev=184.60, samples=20 00:26:52.286 lat (msec) : 2=0.62%, 4=1.13%, 10=3.10%, 20=6.70%, 50=7.71% 00:26:52.286 lat (msec) : 100=15.87%, 250=45.78%, 500=10.86%, 750=3.55%, 1000=3.60% 00:26:52.286 lat (msec) : 2000=1.10% 00:26:52.286 cpu : usr=1.08%, sys=1.20%, ctx=2279, majf=0, minf=1 00:26:52.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:52.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.286 issued rwts: total=0,3554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.286 job9: (groupid=0, jobs=1): err= 0: pid=312757: Sat Dec 7 00:54:07 2024 00:26:52.286 write: IOPS=302, BW=75.6MiB/s (79.3MB/s)(770MiB/10175msec); 0 zone resets 00:26:52.286 slat (usec): min=14, max=244935, avg=1920.73, stdev=8829.81 00:26:52.286 clat (usec): min=1039, max=943191, avg=209548.65, stdev=225800.48 00:26:52.286 lat (usec): min=1085, max=943235, avg=211469.38, stdev=227700.73 00:26:52.286 clat percentiles (msec): 00:26:52.286 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 10], 20.00th=[ 21], 00:26:52.286 | 30.00th=[ 42], 40.00th=[ 78], 50.00th=[ 123], 60.00th=[ 169], 00:26:52.286 | 70.00th=[ 300], 80.00th=[ 376], 90.00th=[ 542], 95.00th=[ 735], 00:26:52.286 | 99.00th=[ 877], 99.50th=[ 885], 99.90th=[ 911], 99.95th=[ 927], 00:26:52.286 | 99.99th=[ 944] 00:26:52.286 bw ( KiB/s): min=16384, max=225280, per=8.34%, avg=77178.60, stdev=57067.01, samples=20 00:26:52.286 iops : min= 64, max= 880, avg=301.45, stdev=222.93, samples=20 00:26:52.286 lat (msec) : 2=0.65%, 4=2.66%, 10=7.08%, 20=9.55%, 50=14.20% 00:26:52.286 lat (msec) : 100=10.82%, 250=19.79%, 500=23.62%, 750=6.95%, 1000=4.68% 00:26:52.286 cpu : usr=0.88%, sys=1.08%, ctx=2216, majf=0, minf=1 00:26:52.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:52.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.286 issued rwts: total=0,3078,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:52.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.286 job10: (groupid=0, jobs=1): err= 0: pid=312758: Sat Dec 7 00:54:07 2024 00:26:52.286 write: IOPS=222, BW=55.7MiB/s (58.4MB/s)(567MiB/10175msec); 0 zone resets 00:26:52.286 slat (usec): min=16, max=247484, avg=3525.11, stdev=12748.24 00:26:52.286 clat (usec): min=679, max=1203.3k, avg=282903.91, stdev=269647.33 00:26:52.286 lat (usec): min=743, max=1218.0k, avg=286429.02, stdev=272635.36 00:26:52.286 clat percentiles (usec): 00:26:52.286 | 1.00th=[ 1582], 5.00th=[ 3097], 10.00th=[ 4146], 00:26:52.286 | 20.00th=[ 41157], 30.00th=[ 128451], 40.00th=[ 177210], 00:26:52.286 | 50.00th=[ 212861], 60.00th=[ 270533], 70.00th=[ 304088], 00:26:52.286 | 80.00th=[ 396362], 90.00th=[ 725615], 95.00th=[ 918553], 00:26:52.286 | 99.00th=[1132463], 99.50th=[1166017], 99.90th=[1199571], 00:26:52.286 | 99.95th=[1199571], 99.99th=[1199571] 00:26:52.286 bw ( KiB/s): min=12288, max=169472, per=6.09%, avg=56391.40, stdev=37115.91, samples=20 00:26:52.286 iops : min= 48, max= 662, avg=220.25, stdev=144.99, samples=20 00:26:52.287 lat (usec) : 750=0.09%, 1000=0.26% 00:26:52.287 lat (msec) : 2=1.32%, 4=7.86%, 10=8.03%, 20=1.06%, 50=2.21% 00:26:52.287 lat (msec) : 100=2.56%, 250=33.41%, 500=25.38%, 750=8.56%, 1000=6.75% 00:26:52.287 lat (msec) : 2000=2.52% 00:26:52.287 cpu : usr=0.56%, sys=0.91%, ctx=1174, majf=0, minf=1 00:26:52.287 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:52.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.287 issued rwts: total=0,2266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.287 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.287 00:26:52.287 Run status group 0 (all jobs): 00:26:52.287 WRITE: bw=904MiB/s (948MB/s), 49.2MiB/s-160MiB/s (51.5MB/s-167MB/s), io=9203MiB (9650MB), run=10126-10181msec 00:26:52.287 00:26:52.287 Disk stats (read/write): 00:26:52.287 nvme0n1: ios=52/5148, merge=0/0, ticks=5873/1221818, in_queue=1227691, util=100.00% 00:26:52.287 nvme10n1: ios=34/3851, merge=0/0, ticks=2382/1142125, in_queue=1144507, util=99.92% 00:26:52.287 nvme1n1: ios=47/7493, merge=0/0, ticks=69/1216299, in_queue=1216368, util=97.93% 00:26:52.287 nvme2n1: ios=20/6858, merge=0/0, ticks=225/1207892, in_queue=1208117, util=97.96% 00:26:52.287 nvme3n1: ios=0/4380, merge=0/0, ticks=0/1209245, in_queue=1209245, util=97.90% 00:26:52.287 nvme4n1: ios=45/6311, merge=0/0, ticks=913/1205311, in_queue=1206224, util=100.00% 00:26:52.287 nvme5n1: ios=0/12782, merge=0/0, ticks=0/1238815, in_queue=1238815, util=98.39% 00:26:52.287 nvme6n1: ios=46/7630, merge=0/0, ticks=5079/1141276, in_queue=1146355, util=100.00% 00:26:52.287 nvme7n1: ios=22/6948, merge=0/0, ticks=497/1228200, in_queue=1228697, util=99.92% 00:26:52.287 nvme8n1: ios=0/6010, merge=0/0, ticks=0/1219109, in_queue=1219109, util=99.00% 00:26:52.287 nvme9n1: ios=47/4387, merge=0/0, ticks=687/1203528, in_queue=1204215, util=100.00% 00:26:52.287 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:52.287 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:52.287 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.287 00:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:52.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:52.287 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.287 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:52.546 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.546 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:52.805 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.805 00:54:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:53.064 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:53.064 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:53.064 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:53.323 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:53.323 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.323 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:53.582 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:53.582 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:53.583 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.583 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:53.840 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:53.840 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.841 rmmod nvme_tcp 00:26:53.841 rmmod nvme_fabrics 00:26:53.841 rmmod nvme_keyring 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 307739 ']' 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 307739 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 307739 ']' 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 307739 00:26:53.841 
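The multiconnection.sh trace above (script lines 37-40) walks each of the 11 initiator-side controllers, disconnects it, waits for its block device to disappear, and then deletes the matching subsystem on the target. A minimal standalone sketch of that teardown loop, assuming rpc_cmd forwards to SPDK's scripts/rpc.py helper and that the serials SPDK1..SPDK11 identify the test namespaces, might look like:

    #!/usr/bin/env bash
    # Sketch of the per-subsystem teardown traced above; helper paths are assumptions.
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the initiator-side connection for this subsystem
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Wait until no block device with serial SPDK${i} remains (waitforserial_disconnect)
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        # Remove the subsystem from the running SPDK target (rpc_cmd in the trace)
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

The remaining records in this group then stop the nvmf target process and unload the nvme-tcp, nvme-fabrics, and nvme-keyring modules before the next test starts.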
00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307739 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307739' 00:26:53.841 killing process with pid 307739 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 307739 00:26:53.841 00:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 307739 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.405 00:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.311 00:26:56.311 real 1m0.347s 00:26:56.311 user 3m33.295s 00:26:56.311 sys 0m15.109s 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.311 ************************************ 00:26:56.311 END TEST nvmf_multiconnection 00:26:56.311 ************************************ 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:26:56.311 00:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:56.570 ************************************ 00:26:56.570 START TEST nvmf_initiator_timeout 00:26:56.570 ************************************ 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.570 * Looking for test storage... 00:26:56.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.570 --rc genhtml_branch_coverage=1 00:26:56.570 --rc genhtml_function_coverage=1 00:26:56.570 --rc genhtml_legend=1 00:26:56.570 --rc geninfo_all_blocks=1 00:26:56.570 --rc geninfo_unexecuted_blocks=1 00:26:56.570 00:26:56.570 ' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.570 --rc genhtml_branch_coverage=1 00:26:56.570 --rc genhtml_function_coverage=1 00:26:56.570 --rc genhtml_legend=1 00:26:56.570 --rc geninfo_all_blocks=1 00:26:56.570 --rc geninfo_unexecuted_blocks=1 00:26:56.570 00:26:56.570 ' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.570 --rc genhtml_branch_coverage=1 00:26:56.570 --rc genhtml_function_coverage=1 00:26:56.570 --rc genhtml_legend=1 00:26:56.570 --rc geninfo_all_blocks=1 00:26:56.570 --rc geninfo_unexecuted_blocks=1 00:26:56.570 00:26:56.570 ' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.570 --rc genhtml_branch_coverage=1 00:26:56.570 --rc genhtml_function_coverage=1 00:26:56.570 --rc genhtml_legend=1 00:26:56.570 --rc geninfo_all_blocks=1 00:26:56.570 --rc geninfo_unexecuted_blocks=1 00:26:56.570 00:26:56.570 ' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.570 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.571 00:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.571 00:54:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.101 00:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.101 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:59.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.102 00:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:59.102 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:59.102 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.102 00:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:59.102 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.102 00:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:26:59.102 00:26:59.102 --- 10.0.0.2 ping statistics --- 00:26:59.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.102 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:26:59.102 00:26:59.102 --- 10.0.0.1 ping statistics --- 00:26:59.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.102 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:59.102 00:54:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=316552 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 316552 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 316552 ']' 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.102 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.102 [2024-12-07 00:54:15.064444] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:26:59.102 [2024-12-07 00:54:15.064523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.102 [2024-12-07 00:54:15.137876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.102 [2024-12-07 00:54:15.185609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.102 [2024-12-07 00:54:15.185661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.102 [2024-12-07 00:54:15.185684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.102 [2024-12-07 00:54:15.185695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.102 [2024-12-07 00:54:15.185704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
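For readability, the namespace plumbing traced in the preceding records (nvmf_tcp_init in test/nvmf/common.sh) reduces to the shell sketch below. Interface names, addresses, and the 4420 port are copied from the xtrace lines above; the exact option handling lives in common.sh, so treat this as an illustrative summary rather than the canonical script:

    ip -4 addr flush cvl_0_0                                             # clear stale addresses on both E810 ports
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                         # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic from the target port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # nvmf_tgt then runs inside the namespace

With both pings succeeding, the log continues with nvmf_tgt starting its reactors and the initiator_timeout test driving RPCs against it.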
00:26:59.102 [2024-12-07 00:54:15.187455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.102 [2024-12-07 00:54:15.187514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.102 [2024-12-07 00:54:15.187579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.102 [2024-12-07 00:54:15.187582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.361 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.361 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:59.361 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 Malloc0 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 Delay0 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 [2024-12-07 00:54:15.372343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.362 [2024-12-07 00:54:15.400612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.362 00:54:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:00.295 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:00.295 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:00.295 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.295 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:00.295 00:54:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=316949 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:27:02.201 00:54:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:02.201 [global] 00:27:02.201 thread=1 00:27:02.201 invalidate=1 00:27:02.201 rw=write 00:27:02.201 time_based=1 00:27:02.201 runtime=60 00:27:02.201 ioengine=libaio 00:27:02.201 direct=1 00:27:02.201 bs=4096 00:27:02.201 iodepth=1 00:27:02.201 norandommap=0 00:27:02.201 numjobs=1 00:27:02.201 00:27:02.201 verify_dump=1 00:27:02.201 verify_backlog=512 00:27:02.201 verify_state_save=0 00:27:02.201 do_verify=1 00:27:02.201 verify=crc32c-intel 00:27:02.201 [job0] 00:27:02.201 filename=/dev/nvme0n1 00:27:02.201 Could not set queue depth (nvme0n1) 00:27:02.201 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:02.201 fio-3.35 00:27:02.201 Starting 1 thread 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.493 true 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.493 true 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.493 true 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.493 true 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.493 00:54:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:08.031 true 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.031 true 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.031 true 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.031 true 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:08.031 00:54:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 316949 00:28:04.259 00:28:04.259 job0: (groupid=0, jobs=1): err= 0: pid=317050: Sat Dec 7 00:55:18 2024 00:28:04.259 read: IOPS=116, BW=467KiB/s (479kB/s)(27.4MiB/60027msec) 00:28:04.259 slat (nsec): min=5114, max=64545, avg=11868.59, stdev=6905.72 00:28:04.259 clat (usec): min=202, max=41025k, avg=8320.92, stdev=489914.90 00:28:04.259 lat (usec): min=213, max=41025k, avg=8332.79, stdev=489914.95 00:28:04.259 clat percentiles (usec): 00:28:04.259 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 00:28:04.259 | 20.00th=[ 241], 30.00th=[ 249], 40.00th=[ 258], 00:28:04.259 | 50.00th=[ 265], 60.00th=[ 273], 70.00th=[ 281], 00:28:04.259 | 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 41157], 00:28:04.259 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:28:04.259 | 99.95th=[ 41157], 99.99th=[17112761] 00:28:04.259 write: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60027msec); 0 zone resets 00:28:04.259 slat (usec): min=6, max=15731, avg=15.51, stdev=232.49 00:28:04.259 clat (usec): min=162, max=621, avg=198.24, stdev=30.00 00:28:04.259 lat (usec): min=170, max=15961, avg=213.75, stdev=235.72 00:28:04.259 clat percentiles (usec): 00:28:04.259 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:28:04.259 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:28:04.259 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 233], 95.00th=[ 258], 00:28:04.259 | 99.00th=[ 322], 99.50th=[ 367], 99.90th=[ 412], 99.95th=[ 420], 00:28:04.259 | 99.99th=[ 619] 
00:28:04.259 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=7168.00, stdev=1896.08, samples=8 00:28:04.259 iops : min= 1024, max= 2048, avg=1792.00, stdev=474.02, samples=8 00:28:04.259 lat (usec) : 250=63.52%, 500=33.59%, 750=0.20%, 1000=0.01% 00:28:04.259 lat (msec) : 50=2.68%, >=2000=0.01% 00:28:04.259 cpu : usr=0.17%, sys=0.34%, ctx=14184, majf=0, minf=1 00:28:04.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.259 issued rwts: total=7014,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:04.259 00:28:04.259 Run status group 0 (all jobs): 00:28:04.259 READ: bw=467KiB/s (479kB/s), 467KiB/s-467KiB/s (479kB/s-479kB/s), io=27.4MiB (28.7MB), run=60027-60027msec 00:28:04.259 WRITE: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60027-60027msec 00:28:04.259 00:28:04.259 Disk stats (read/write): 00:28:04.259 nvme0n1: ios=7109/7168, merge=0/0, ticks=18180/1356, in_queue=19536, util=99.71% 00:28:04.259 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:04.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:04.260 nvmf hotplug test: fio successful as expected 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT 
SIGTERM EXIT 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.260 rmmod nvme_tcp 00:28:04.260 rmmod nvme_fabrics 00:28:04.260 rmmod nvme_keyring 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 316552 ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 316552 ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316552' 00:28:04.260 killing process with pid 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 316552 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:04.260 00:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.260 00:55:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.200 00:28:05.200 real 1m8.532s 00:28:05.200 user 4m11.315s 00:28:05.200 sys 0m6.888s 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.200 ************************************ 00:28:05.200 END TEST nvmf_initiator_timeout 00:28:05.200 ************************************ 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.200 00:55:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.108 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.109 00:55:23 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:07.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:07.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:07.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:07.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:07.109 ************************************ 00:28:07.109 START TEST nvmf_perf_adq 00:28:07.109 ************************************ 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:07.109 * Looking for test storage... 
00:28:07.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.109 --rc genhtml_branch_coverage=1 00:28:07.109 --rc genhtml_function_coverage=1 00:28:07.109 --rc genhtml_legend=1 00:28:07.109 --rc geninfo_all_blocks=1 00:28:07.109 --rc geninfo_unexecuted_blocks=1 00:28:07.109 00:28:07.109 ' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.109 --rc genhtml_branch_coverage=1 00:28:07.109 --rc genhtml_function_coverage=1 00:28:07.109 --rc genhtml_legend=1 00:28:07.109 --rc geninfo_all_blocks=1 00:28:07.109 --rc geninfo_unexecuted_blocks=1 00:28:07.109 00:28:07.109 ' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.109 --rc genhtml_branch_coverage=1 00:28:07.109 --rc genhtml_function_coverage=1 00:28:07.109 --rc genhtml_legend=1 00:28:07.109 --rc geninfo_all_blocks=1 00:28:07.109 --rc geninfo_unexecuted_blocks=1 00:28:07.109 00:28:07.109 ' 00:28:07.109 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.110 --rc genhtml_branch_coverage=1 00:28:07.110 --rc genhtml_function_coverage=1 00:28:07.110 --rc genhtml_legend=1 00:28:07.110 --rc geninfo_all_blocks=1 00:28:07.110 --rc geninfo_unexecuted_blocks=1 00:28:07.110 00:28:07.110 ' 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.110 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.368 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:07.368 00:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.368 00:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.897 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.898 00:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.898 00:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:09.898 00:55:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:10.155 00:55:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:14.358 00:55:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.636 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.637 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:28:19.638 00:28:19.638 --- 10.0.0.2 ping statistics --- 00:28:19.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.638 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:28:19.638 00:28:19.638 --- 10.0.0.1 ping statistics --- 00:28:19.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.638 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=328829 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 328829 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 328829 ']' 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.638 00:55:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 [2024-12-07 00:55:35.027731] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:28:19.638 [2024-12-07 00:55:35.027821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.638 [2024-12-07 00:55:35.100794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.638 [2024-12-07 00:55:35.147636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.638 [2024-12-07 00:55:35.147684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.638 [2024-12-07 00:55:35.147704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.638 [2024-12-07 00:55:35.147715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.638 [2024-12-07 00:55:35.147724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.638 [2024-12-07 00:55:35.149139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.638 [2024-12-07 00:55:35.149195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.638 [2024-12-07 00:55:35.149260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.638 [2024-12-07 00:55:35.149263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 
00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 [2024-12-07 00:55:35.423403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 Malloc1 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.638 [2024-12-07 00:55:35.492091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=328860 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:19.638 00:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:21.543 "tick_rate": 2700000000, 00:28:21.543 "poll_groups": [ 00:28:21.543 { 00:28:21.543 "name": "nvmf_tgt_poll_group_000", 00:28:21.543 "admin_qpairs": 1, 00:28:21.543 "io_qpairs": 1, 00:28:21.543 "current_admin_qpairs": 1, 00:28:21.543 "current_io_qpairs": 1, 00:28:21.543 "pending_bdev_io": 0, 00:28:21.543 "completed_nvme_io": 19699, 00:28:21.543 "transports": [ 00:28:21.543 { 00:28:21.543 "trtype": "TCP" 00:28:21.543 } 00:28:21.543 ] 00:28:21.543 }, 00:28:21.543 { 00:28:21.543 "name": "nvmf_tgt_poll_group_001", 00:28:21.543 "admin_qpairs": 0, 00:28:21.543 "io_qpairs": 1, 00:28:21.543 "current_admin_qpairs": 0, 00:28:21.543 "current_io_qpairs": 1, 00:28:21.543 "pending_bdev_io": 0, 00:28:21.543 "completed_nvme_io": 20115, 00:28:21.543 "transports": [ 00:28:21.543 { 00:28:21.543 "trtype": "TCP" 00:28:21.543 } 00:28:21.543 ] 00:28:21.543 }, 00:28:21.543 { 00:28:21.543 "name": "nvmf_tgt_poll_group_002", 00:28:21.543 "admin_qpairs": 0, 00:28:21.543 "io_qpairs": 1, 00:28:21.543 "current_admin_qpairs": 0, 00:28:21.543 "current_io_qpairs": 1, 00:28:21.543 "pending_bdev_io": 0, 00:28:21.543 "completed_nvme_io": 19945, 00:28:21.543 "transports": [ 00:28:21.543 { 00:28:21.543 "trtype": "TCP" 00:28:21.543 } 00:28:21.543 ] 00:28:21.543 }, 00:28:21.543 { 00:28:21.543 "name": "nvmf_tgt_poll_group_003", 00:28:21.543 "admin_qpairs": 0, 00:28:21.543 "io_qpairs": 1, 00:28:21.543 "current_admin_qpairs": 0, 00:28:21.543 "current_io_qpairs": 1, 00:28:21.543 "pending_bdev_io": 0, 00:28:21.543 "completed_nvme_io": 19791, 00:28:21.543 "transports": [ 00:28:21.543 { 00:28:21.543 "trtype": "TCP" 00:28:21.543 } 00:28:21.543 ] 00:28:21.543 } 00:28:21.543 ] 00:28:21.543 }' 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:21.543 00:55:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 328860 00:28:29.656 Initializing NVMe Controllers 00:28:29.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:29.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:29.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:29.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:29.656 
Initialization complete. Launching workers. 00:28:29.656 ======================================================== 00:28:29.656 Latency(us) 00:28:29.656 Device Information : IOPS MiB/s Average min max 00:28:29.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10327.42 40.34 6197.78 2418.41 10773.01 00:28:29.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10572.11 41.30 6055.00 2510.44 9845.27 00:28:29.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10498.12 41.01 6095.96 2459.70 9958.95 00:28:29.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10403.22 40.64 6153.87 2374.49 10555.19 00:28:29.656 ======================================================== 00:28:29.656 Total : 41800.87 163.28 6125.17 2374.49 10773.01 00:28:29.656 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.656 rmmod nvme_tcp 00:28:29.656 rmmod nvme_fabrics 00:28:29.656 rmmod nvme_keyring 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 328829 ']' 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 328829 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 328829 ']' 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 328829 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328829 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328829' 00:28:29.656 killing process with pid 328829 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 328829 00:28:29.656 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 328829 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.916 00:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.457 00:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.457 00:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:32.457 00:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:32.457 00:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:32.716 00:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:35.250 00:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.523 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
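[Editor's note] Between the two perf passes the harness runs adq_reload_driver again, reloading the Intel ice driver so the E810 ports come back with clean queue/TC state before ADQ is configured. A minimal equivalent of that reload step, using the same commands and sleep interval seen in this trace (adjust for other NICs):

    # ensure the mqprio qdisc module is available for ADQ channel mode
    modprobe -a sch_mqprio
    # reload the E810 driver to reset queue and traffic-class state
    rmmod ice
    modprobe ice
    # give the ports time to come back up before re-running nvmftestinit
    sleep 5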
00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.524 00:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:40.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.524 00:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.524 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:28:40.524 00:28:40.524 --- 10.0.0.2 ping statistics --- 00:28:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.525 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:40.525 00:28:40.525 --- 10.0.0.1 ping statistics --- 00:28:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.525 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:40.525 net.core.busy_poll = 1 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:40.525 net.core.busy_read = 1 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=331503 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 331503 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 331503 ']' 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.525 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.525 [2024-12-07 00:55:56.494412] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:28:40.525 [2024-12-07 00:55:56.494504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.525 [2024-12-07 00:55:56.570006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.525 [2024-12-07 00:55:56.618855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:40.525 [2024-12-07 00:55:56.618912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.525 [2024-12-07 00:55:56.618925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.525 [2024-12-07 00:55:56.618937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.525 [2024-12-07 00:55:56.618946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.525 [2024-12-07 00:55:56.620447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.525 [2024-12-07 00:55:56.620510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.525 [2024-12-07 00:55:56.620574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.525 [2024-12-07 00:55:56.620576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.783 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 [2024-12-07 00:55:56.974439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.041 00:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 Malloc1 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.041 [2024-12-07 00:55:57.038181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=331621 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:41.041 00:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.941 00:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:42.941 "tick_rate": 2700000000, 00:28:42.941 "poll_groups": [ 00:28:42.941 { 00:28:42.941 "name": "nvmf_tgt_poll_group_000", 00:28:42.941 "admin_qpairs": 1, 00:28:42.941 "io_qpairs": 2, 00:28:42.941 "current_admin_qpairs": 1, 00:28:42.941 "current_io_qpairs": 2, 00:28:42.941 "pending_bdev_io": 0, 00:28:42.941 "completed_nvme_io": 23445, 00:28:42.941 "transports": [ 00:28:42.941 { 00:28:42.941 "trtype": "TCP" 00:28:42.941 } 00:28:42.941 ] 00:28:42.941 }, 00:28:42.941 { 00:28:42.941 "name": "nvmf_tgt_poll_group_001", 00:28:42.941 "admin_qpairs": 0, 00:28:42.941 "io_qpairs": 2, 00:28:42.941 "current_admin_qpairs": 0, 00:28:42.941 "current_io_qpairs": 2, 00:28:42.941 "pending_bdev_io": 0, 00:28:42.941 "completed_nvme_io": 26196, 00:28:42.941 "transports": [ 00:28:42.941 { 00:28:42.941 "trtype": "TCP" 00:28:42.941 } 00:28:42.941 ] 00:28:42.941 }, 00:28:42.941 { 00:28:42.941 "name": "nvmf_tgt_poll_group_002", 00:28:42.941 "admin_qpairs": 0, 00:28:42.941 "io_qpairs": 0, 00:28:42.941 "current_admin_qpairs": 0, 00:28:42.941 "current_io_qpairs": 0, 00:28:42.941 "pending_bdev_io": 0, 00:28:42.941 "completed_nvme_io": 0, 00:28:42.941 "transports": [ 00:28:42.941 { 00:28:42.941 "trtype": "TCP" 00:28:42.941 } 00:28:42.941 ] 00:28:42.941 }, 00:28:42.941 { 00:28:42.941 "name": "nvmf_tgt_poll_group_003", 00:28:42.941 "admin_qpairs": 0, 00:28:42.941 "io_qpairs": 0, 00:28:42.941 "current_admin_qpairs": 0, 00:28:42.941 "current_io_qpairs": 0, 00:28:42.941 "pending_bdev_io": 0, 00:28:42.941 "completed_nvme_io": 0, 00:28:42.941 "transports": [ 00:28:42.941 { 00:28:42.941 "trtype": "TCP" 00:28:42.941 } 00:28:42.941 ] 00:28:42.941 } 00:28:42.941 ] 00:28:42.941 }' 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:42.941 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:43.199 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:43.199 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:43.199 00:55:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 331621 00:28:51.302 Initializing NVMe Controllers 00:28:51.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:51.302 Initialization complete. Launching workers. 
00:28:51.302 ======================================================== 00:28:51.302 Latency(us) 00:28:51.302 Device Information : IOPS MiB/s Average min max 00:28:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7274.30 28.42 8813.53 1589.43 55065.43 00:28:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5811.70 22.70 11012.78 1717.12 53503.95 00:28:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6568.30 25.66 9747.34 1302.51 53659.49 00:28:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6860.10 26.80 9330.02 1889.67 54000.31 00:28:51.302 ======================================================== 00:28:51.302 Total : 26514.39 103.57 9660.54 1302.51 55065.43 00:28:51.302 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.302 rmmod nvme_tcp 00:28:51.302 rmmod nvme_fabrics 00:28:51.302 rmmod nvme_keyring 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 331503 ']' 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 331503 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 331503 ']' 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 331503 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331503 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331503' 00:28:51.302 killing process with pid 331503 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 331503 00:28:51.302 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 331503 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.560 00:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.560 00:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:53.464 00:28:53.464 real 0m46.466s 00:28:53.464 user 2m41.155s 00:28:53.464 sys 0m10.332s 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.464 ************************************ 00:28:53.464 END TEST nvmf_perf_adq 00:28:53.464 ************************************ 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:53.464 ************************************ 00:28:53.464 START TEST nvmf_shutdown 00:28:53.464 ************************************ 00:28:53.464 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:53.723 * Looking for test storage... 
00:28:53.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.723 --rc genhtml_branch_coverage=1 00:28:53.723 --rc genhtml_function_coverage=1 00:28:53.723 --rc genhtml_legend=1 00:28:53.723 --rc geninfo_all_blocks=1 00:28:53.723 --rc geninfo_unexecuted_blocks=1 00:28:53.723 00:28:53.723 ' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.723 --rc genhtml_branch_coverage=1 00:28:53.723 --rc genhtml_function_coverage=1 00:28:53.723 --rc genhtml_legend=1 00:28:53.723 --rc geninfo_all_blocks=1 00:28:53.723 --rc geninfo_unexecuted_blocks=1 00:28:53.723 00:28:53.723 ' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.723 --rc genhtml_branch_coverage=1 00:28:53.723 --rc genhtml_function_coverage=1 00:28:53.723 --rc genhtml_legend=1 00:28:53.723 --rc geninfo_all_blocks=1 00:28:53.723 --rc geninfo_unexecuted_blocks=1 00:28:53.723 00:28:53.723 ' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.723 --rc genhtml_branch_coverage=1 00:28:53.723 --rc genhtml_function_coverage=1 00:28:53.723 --rc genhtml_legend=1 00:28:53.723 --rc geninfo_all_blocks=1 00:28:53.723 --rc geninfo_unexecuted_blocks=1 00:28:53.723 00:28:53.723 ' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.723 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:53.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:53.724 00:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:53.724 ************************************ 00:28:53.724 START TEST nvmf_shutdown_tc1 00:28:53.724 ************************************ 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.724 00:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.310 00:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.310 00:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:56.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:56.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.310 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:56.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:56.311 00:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:56.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.311 00:56:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:28:56.311 00:28:56.311 --- 10.0.0.2 ping statistics --- 00:28:56.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.311 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:28:56.311 00:28:56.311 --- 10.0.0.1 ping statistics --- 00:28:56.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.311 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=334797 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 334797 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 334797 ']' 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.311 [2024-12-07 00:56:12.180900] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:28:56.311 [2024-12-07 00:56:12.180981] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.311 [2024-12-07 00:56:12.255674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.311 [2024-12-07 00:56:12.305209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.311 [2024-12-07 00:56:12.305280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.311 [2024-12-07 00:56:12.305294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.311 [2024-12-07 00:56:12.305305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.311 [2024-12-07 00:56:12.305315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.311 [2024-12-07 00:56:12.307086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.311 [2024-12-07 00:56:12.307114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.311 [2024-12-07 00:56:12.307173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.311 [2024-12-07 00:56:12.307176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.311 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.312 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.312 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.312 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.590 [2024-12-07 00:56:12.456206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:56.590 00:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.590 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.591 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.591 Malloc1 
00:28:56.591 [2024-12-07 00:56:12.551504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.591 Malloc2 00:28:56.591 Malloc3 00:28:56.591 Malloc4 00:28:56.591 Malloc5 00:28:56.867 Malloc6 00:28:56.867 Malloc7 00:28:56.867 Malloc8 00:28:56.867 Malloc9 00:28:56.867 Malloc10 00:28:56.867 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.867 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:56.867 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.867 00:56:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=334977 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 334977 /var/tmp/bdevperf.sock 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 334977 ']' 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:57.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.173 { 00:28:57.173 "params": { 00:28:57.173 "name": "Nvme$subsystem", 00:28:57.173 "trtype": "$TEST_TRANSPORT", 00:28:57.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.173 "adrfam": "ipv4", 00:28:57.173 "trsvcid": "$NVMF_PORT", 00:28:57.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.173 "hdgst": ${hdgst:-false}, 00:28:57.173 "ddgst": ${ddgst:-false} 00:28:57.173 }, 00:28:57.173 "method": "bdev_nvme_attach_controller" 00:28:57.173 } 00:28:57.173 EOF 00:28:57.173 )") 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.173 { 00:28:57.173 "params": { 00:28:57.173 "name": "Nvme$subsystem", 00:28:57.173 "trtype": "$TEST_TRANSPORT", 00:28:57.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.173 "adrfam": "ipv4", 00:28:57.173 "trsvcid": "$NVMF_PORT", 00:28:57.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.173 "hdgst": ${hdgst:-false}, 00:28:57.173 "ddgst": ${ddgst:-false} 00:28:57.173 }, 00:28:57.173 "method": "bdev_nvme_attach_controller" 00:28:57.173 } 00:28:57.173 EOF 00:28:57.173 )") 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.173 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.173 { 00:28:57.173 "params": { 00:28:57.173 "name": "Nvme$subsystem", 00:28:57.173 "trtype": "$TEST_TRANSPORT", 00:28:57.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:57.174 { 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme$subsystem", 00:28:57.174 "trtype": "$TEST_TRANSPORT", 00:28:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "$NVMF_PORT", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.174 "hdgst": ${hdgst:-false}, 00:28:57.174 "ddgst": ${ddgst:-false} 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 } 00:28:57.174 EOF 00:28:57.174 )") 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:57.174 00:56:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme1", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.174 "hdgst": false, 00:28:57.174 "ddgst": false 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 },{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme2", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:57.174 "hdgst": false, 00:28:57.174 "ddgst": false 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 },{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme3", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:57.174 "hdgst": false, 00:28:57.174 "ddgst": false 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 },{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme4", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:57.174 "hdgst": false, 00:28:57.174 "ddgst": false 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 },{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme5", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:57.174 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:57.174 "hdgst": false, 00:28:57.174 "ddgst": false 00:28:57.174 }, 00:28:57.174 "method": "bdev_nvme_attach_controller" 00:28:57.174 },{ 00:28:57.174 "params": { 00:28:57.174 "name": "Nvme6", 00:28:57.174 "trtype": "tcp", 00:28:57.174 "traddr": "10.0.0.2", 00:28:57.174 "adrfam": "ipv4", 00:28:57.174 "trsvcid": "4420", 00:28:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:57.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:57.175 "hdgst": false, 00:28:57.175 "ddgst": false 00:28:57.175 }, 00:28:57.175 "method": "bdev_nvme_attach_controller" 00:28:57.175 },{ 00:28:57.175 "params": { 00:28:57.175 "name": "Nvme7", 00:28:57.175 "trtype": "tcp", 00:28:57.175 "traddr": "10.0.0.2", 00:28:57.175 "adrfam": "ipv4", 00:28:57.175 "trsvcid": "4420", 00:28:57.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:57.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:57.175 "hdgst": false, 00:28:57.175 "ddgst": false 00:28:57.175 }, 00:28:57.175 "method": "bdev_nvme_attach_controller" 00:28:57.175 },{ 00:28:57.175 "params": { 00:28:57.175 "name": "Nvme8", 00:28:57.175 "trtype": "tcp", 00:28:57.175 "traddr": "10.0.0.2", 00:28:57.175 "adrfam": "ipv4", 00:28:57.175 "trsvcid": "4420", 00:28:57.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:57.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:57.175 "hdgst": false, 00:28:57.175 "ddgst": false 00:28:57.175 }, 00:28:57.175 "method": "bdev_nvme_attach_controller" 00:28:57.175 },{ 00:28:57.175 "params": { 00:28:57.175 "name": "Nvme9", 00:28:57.175 "trtype": "tcp", 00:28:57.175 "traddr": "10.0.0.2", 00:28:57.175 "adrfam": "ipv4", 00:28:57.175 "trsvcid": "4420", 00:28:57.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:57.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:57.175 "hdgst": false, 00:28:57.175 "ddgst": false 00:28:57.175 }, 00:28:57.175 "method": "bdev_nvme_attach_controller" 00:28:57.175 },{ 00:28:57.175 "params": { 00:28:57.175 "name": "Nvme10", 00:28:57.175 "trtype": "tcp", 00:28:57.175 "traddr": "10.0.0.2", 00:28:57.175 "adrfam": "ipv4", 00:28:57.175 "trsvcid": "4420", 00:28:57.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:57.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:57.175 "hdgst": false, 00:28:57.175 "ddgst": false 00:28:57.175 }, 00:28:57.175 "method": "bdev_nvme_attach_controller" 00:28:57.175 }' 00:28:57.175 [2024-12-07 00:56:13.060656] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:28:57.175 [2024-12-07 00:56:13.060747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:57.175 [2024-12-07 00:56:13.135396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.175 [2024-12-07 00:56:13.182865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.097 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.098 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 334977 00:28:59.098 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:59.098 00:56:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:00.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 334977 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 334797 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.031 { 00:29:00.031 "params": { 00:29:00.031 "name": "Nvme$subsystem", 00:29:00.031 "trtype": "$TEST_TRANSPORT", 00:29:00.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.031 "adrfam": "ipv4", 00:29:00.031 "trsvcid": "$NVMF_PORT", 00:29:00.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.031 "hdgst": ${hdgst:-false}, 00:29:00.031 "ddgst": ${ddgst:-false} 00:29:00.031 }, 00:29:00.031 "method": "bdev_nvme_attach_controller" 00:29:00.031 } 00:29:00.031 EOF 00:29:00.031 )") 00:29:00.031 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 
"trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 
"params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.032 { 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme$subsystem", 00:29:00.032 "trtype": "$TEST_TRANSPORT", 00:29:00.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "$NVMF_PORT", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.032 "hdgst": ${hdgst:-false}, 00:29:00.032 "ddgst": ${ddgst:-false} 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 } 00:29:00.032 EOF 00:29:00.032 )") 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:00.032 00:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme1", 00:29:00.032 "trtype": "tcp", 00:29:00.032 "traddr": "10.0.0.2", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "4420", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.032 "hdgst": false, 00:29:00.032 "ddgst": false 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 },{ 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme2", 00:29:00.032 "trtype": "tcp", 00:29:00.032 "traddr": "10.0.0.2", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "4420", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.032 "hdgst": false, 00:29:00.032 "ddgst": false 00:29:00.032 }, 00:29:00.032 "method": "bdev_nvme_attach_controller" 00:29:00.032 },{ 00:29:00.032 "params": { 00:29:00.032 "name": "Nvme3", 00:29:00.032 "trtype": "tcp", 00:29:00.032 "traddr": "10.0.0.2", 00:29:00.032 "adrfam": "ipv4", 00:29:00.032 "trsvcid": "4420", 00:29:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.032 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.032 "hdgst": false, 00:29:00.032 "ddgst": false 00:29:00.032 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme4", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme5", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme6", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme7", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme8", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme9", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 },{ 00:29:00.033 "params": { 00:29:00.033 "name": "Nvme10", 00:29:00.033 "trtype": "tcp", 00:29:00.033 "traddr": "10.0.0.2", 00:29:00.033 "adrfam": "ipv4", 00:29:00.033 "trsvcid": "4420", 00:29:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.033 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.033 "hdgst": false, 00:29:00.033 "ddgst": false 00:29:00.033 }, 00:29:00.033 "method": "bdev_nvme_attach_controller" 00:29:00.033 }' 00:29:00.033 [2024-12-07 00:56:16.125611] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:00.033 [2024-12-07 00:56:16.125698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335401 ] 00:29:00.291 [2024-12-07 00:56:16.199762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.291 [2024-12-07 00:56:16.249085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.668 Running I/O for 1 seconds... 00:29:02.865 1808.00 IOPS, 113.00 MiB/s 00:29:02.865 Latency(us) 00:29:02.865 [2024-12-06T23:56:19.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.865 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme1n1 : 1.11 235.15 14.70 0.00 0.00 267547.08 12913.02 231463.44 00:29:02.865 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme2n1 : 1.10 233.07 14.57 0.00 0.00 266693.97 19418.07 256318.58 00:29:02.865 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme3n1 : 1.08 236.28 14.77 0.00 0.00 258614.99 17864.63 262532.36 00:29:02.865 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme4n1 : 1.10 238.82 14.93 0.00 0.00 249673.89 9320.68 217482.43 00:29:02.865 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme5n1 : 1.18 217.21 13.58 0.00 0.00 272862.63 21359.88 288940.94 00:29:02.865 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme6n1 : 1.14 229.40 14.34 0.00 0.00 252293.37 4247.70 260978.92 00:29:02.865 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme7n1 : 1.18 270.37 16.90 0.00 0.00 212489.10 15825.73 233016.89 00:29:02.865 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification 
LBA range: start 0x0 length 0x400 00:29:02.865 Nvme8n1 : 1.17 223.95 14.00 0.00 0.00 251253.37 4344.79 259425.47 00:29:02.865 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.865 Nvme9n1 : 1.19 269.40 16.84 0.00 0.00 206037.90 12621.75 260978.92 00:29:02.865 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.865 Verification LBA range: start 0x0 length 0x400 00:29:02.866 Nvme10n1 : 1.17 218.30 13.64 0.00 0.00 249684.76 21068.61 271853.04 00:29:02.866 [2024-12-06T23:56:19.017Z] =================================================================================================================== 00:29:02.866 [2024-12-06T23:56:19.017Z] Total : 2371.94 148.25 0.00 0.00 246892.60 4247.70 288940.94 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.125 rmmod nvme_tcp 00:29:03.125 rmmod nvme_fabrics 00:29:03.125 rmmod nvme_keyring 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 334797 ']' 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 334797 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 334797 ']' 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 334797 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334797 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334797' 00:29:03.125 killing process with pid 334797 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 334797 00:29:03.125 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 334797 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.695 00:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.230 00:29:06.230 real 0m11.975s 00:29:06.230 user 0m34.510s 00:29:06.230 sys 0m3.284s 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.230 ************************************ 00:29:06.230 END TEST nvmf_shutdown_tc1 00:29:06.230 ************************************ 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:06.230 ************************************ 00:29:06.230 START TEST nvmf_shutdown_tc2 00:29:06.230 ************************************ 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:06.230 00:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.230 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:06.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:06.231 00:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:06.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:06.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:06.231 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.231 00:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:29:06.231 00:29:06.231 --- 10.0.0.2 ping statistics --- 00:29:06.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.231 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:06.231 00:29:06.231 --- 10.0.0.1 ping statistics --- 00:29:06.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.231 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.231 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=336165 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 336165 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 336165 ']' 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.232 00:56:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 [2024-12-07 00:56:22.043482] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:06.232 [2024-12-07 00:56:22.043591] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.232 [2024-12-07 00:56:22.115956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.232 [2024-12-07 00:56:22.159552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.232 [2024-12-07 00:56:22.159610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.232 [2024-12-07 00:56:22.159632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.232 [2024-12-07 00:56:22.159642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.232 [2024-12-07 00:56:22.159652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.232 [2024-12-07 00:56:22.161086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.232 [2024-12-07 00:56:22.161150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.232 [2024-12-07 00:56:22.161216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.232 [2024-12-07 00:56:22.161219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 [2024-12-07 00:56:22.303341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:06.232 00:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.232 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.232 Malloc1 
00:29:06.491 [2024-12-07 00:56:22.398254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.491 Malloc2 00:29:06.491 Malloc3 00:29:06.491 Malloc4 00:29:06.491 Malloc5 00:29:06.491 Malloc6 00:29:06.750 Malloc7 00:29:06.750 Malloc8 00:29:06.750 Malloc9 00:29:06.750 Malloc10 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=336345 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 336345 /var/tmp/bdevperf.sock 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 336345 ']' 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.750 "ddgst": ${ddgst:-false} 00:29:06.750 }, 00:29:06.750 "method": "bdev_nvme_attach_controller" 00:29:06.750 } 00:29:06.750 EOF 00:29:06.750 )") 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.750 00:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.750 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.750 { 00:29:06.750 "params": { 00:29:06.750 "name": "Nvme$subsystem", 00:29:06.750 "trtype": "$TEST_TRANSPORT", 00:29:06.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.750 "adrfam": "ipv4", 00:29:06.750 "trsvcid": "$NVMF_PORT", 00:29:06.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.750 "hdgst": ${hdgst:-false}, 00:29:06.751 "ddgst": ${ddgst:-false} 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 } 00:29:06.751 EOF 00:29:06.751 )") 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.751 { 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme$subsystem", 00:29:06.751 "trtype": "$TEST_TRANSPORT", 00:29:06.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "$NVMF_PORT", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.751 "hdgst": ${hdgst:-false}, 00:29:06.751 "ddgst": ${ddgst:-false} 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 } 00:29:06.751 EOF 00:29:06.751 )") 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.751 { 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme$subsystem", 00:29:06.751 "trtype": "$TEST_TRANSPORT", 00:29:06.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "$NVMF_PORT", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.751 "hdgst": ${hdgst:-false}, 00:29:06.751 "ddgst": ${ddgst:-false} 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 } 00:29:06.751 EOF 00:29:06.751 )") 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
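The fragments accumulated above are the --json form of bdev_nvme_attach_controller; the merged result is what the bdevperf process started earlier reads from /dev/fd/63. As an illustration only (the script never issues it this way), a single fragment corresponds to an RPC call like the one below, followed by the bdevperf flags exactly as they appear in the trace.

    # One fragment expressed as an RPC call (illustrative equivalent, not what the test runs).
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # bdevperf invocation from the trace: queue depth 64, 64 KiB I/O, verify workload, 10 s.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 \
        -q 64 -o 65536 -w verify -t 10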
00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:06.751 00:56:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme1", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme2", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme3", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme4", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme5", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme6", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme7", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme8", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme9", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 },{ 00:29:06.751 "params": { 00:29:06.751 "name": "Nvme10", 00:29:06.751 "trtype": "tcp", 00:29:06.751 "traddr": "10.0.0.2", 00:29:06.751 "adrfam": "ipv4", 00:29:06.751 "trsvcid": "4420", 00:29:06.751 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.751 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.751 "hdgst": false, 00:29:06.751 "ddgst": false 00:29:06.751 }, 00:29:06.751 "method": "bdev_nvme_attach_controller" 00:29:06.751 }' 00:29:07.011 [2024-12-07 00:56:22.900144] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:07.011 [2024-12-07 00:56:22.900224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336345 ] 00:29:07.011 [2024-12-07 00:56:22.974236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.011 [2024-12-07 00:56:23.021211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.392 Running I/O for 10 seconds... 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.960 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=82 00:29:08.961 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 82 -ge 100 ']' 00:29:08.961 00:56:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 336345 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 336345 ']' 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 336345 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336345 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.220 00:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336345' 00:29:09.220 killing process with pid 336345 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 336345 00:29:09.220 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 336345 00:29:09.478 1801.00 IOPS, 112.56 MiB/s [2024-12-06T23:56:25.629Z] Received shutdown signal, test time was about 1.057574 seconds 00:29:09.478 00:29:09.478 Latency(us) 00:29:09.478 [2024-12-06T23:56:25.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.478 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme1n1 : 1.04 250.48 15.66 0.00 0.00 251869.97 2682.12 254765.13 00:29:09.478 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme2n1 : 1.05 242.98 15.19 0.00 0.00 255122.77 18544.26 259425.47 00:29:09.478 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme3n1 : 1.04 246.91 15.43 0.00 0.00 247185.26 32039.82 240784.12 00:29:09.478 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme4n1 : 1.03 248.49 15.53 0.00 0.00 241123.93 18641.35 260978.92 00:29:09.478 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme5n1 : 1.06 242.25 15.14 0.00 0.00 241718.80 19320.98 256318.58 00:29:09.478 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme6n1 : 0.99 193.02 12.06 0.00 0.00 297475.29 21068.61 262532.36 00:29:09.478 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme7n1 : 1.06 242.46 15.15 0.00 0.00 233176.37 15922.82 254765.13 00:29:09.478 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme8n1 : 1.04 245.07 15.32 0.00 0.00 226402.61 28544.57 257872.02 00:29:09.478 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme9n1 : 1.02 187.81 11.74 0.00 0.00 288535.64 21554.06 259425.47 00:29:09.478 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:09.478 Verification LBA range: start 0x0 length 0x400 00:29:09.478 Nvme10n1 : 1.03 187.13 11.70 0.00 0.00 284076.06 21068.61 279620.27 00:29:09.478 [2024-12-06T23:56:25.629Z] =================================================================================================================== 00:29:09.478 [2024-12-06T23:56:25.629Z] Total : 2286.61 142.91 0.00 0.00 253959.37 2682.12 279620.27 00:29:09.738 00:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:10.672 00:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 336165 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.672 rmmod nvme_tcp 00:29:10.672 rmmod nvme_fabrics 00:29:10.672 rmmod nvme_keyring 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 336165 ']' 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 336165 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 336165 ']' 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 336165 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336165 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336165' 00:29:10.672 killing process with pid 336165 00:29:10.672 00:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 336165 00:29:10.672 00:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 336165 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.244 00:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.154 00:29:13.154 real 0m7.411s 00:29:13.154 user 0m22.294s 00:29:13.154 sys 0m1.440s 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.154 ************************************ 00:29:13.154 END TEST nvmf_shutdown_tc2 00:29:13.154 ************************************ 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.154 ************************************ 00:29:13.154 START TEST nvmf_shutdown_tc3 00:29:13.154 ************************************ 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.154 00:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:13.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:13.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:13.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.154 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.155 00:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:13.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.155 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.415 00:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:29:13.415 00:29:13.415 --- 10.0.0.2 ping statistics --- 00:29:13.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.415 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:13.415 00:29:13.415 --- 10.0.0.1 ping statistics --- 00:29:13.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.415 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.415 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=337239 00:29:13.416 00:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 337239 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 337239 ']' 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.416 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.677 [2024-12-07 00:56:29.611023] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:13.677 [2024-12-07 00:56:29.611132] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.677 [2024-12-07 00:56:29.684299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.677 [2024-12-07 00:56:29.730550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.677 [2024-12-07 00:56:29.730598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.677 [2024-12-07 00:56:29.730619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.677 [2024-12-07 00:56:29.730630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.677 [2024-12-07 00:56:29.730639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
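(Editor's aside, not part of the captured trace.) The nvmf_tgt launch above passes "-m 0x1E", and the reactor notices that follow show exactly cores 1, 2, 3 and 4 coming up, which is what that mask selects. A hex core mask can be decoded with a few lines of plain bash; decode_core_mask is a made-up helper name used only for this illustration:

decode_core_mask() {
    # Turn an SPDK/DPDK core mask (e.g. 0x1E) into the list of CPU cores it selects.
    local mask=$(( $1 ))
    local core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        (( mask >>= 1, core++ ))
    done
    echo "${cores[*]}"
}

decode_core_mask 0x1E    # prints "1 2 3 4", matching the reactor notices just below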
00:29:13.677 [2024-12-07 00:56:29.732073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.677 [2024-12-07 00:56:29.732099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.677 [2024-12-07 00:56:29.732156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.677 [2024-12-07 00:56:29.732159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.937 [2024-12-07 00:56:29.877226] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.937 00:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.937 Malloc1 00:29:13.937 [2024-12-07 00:56:29.981283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.937 Malloc2 00:29:13.937 Malloc3 00:29:14.195 Malloc4 00:29:14.195 Malloc5 00:29:14.195 Malloc6 00:29:14.195 Malloc7 00:29:14.195 Malloc8 00:29:14.455 Malloc9 00:29:14.455 Malloc10 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=337327 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 337327 /var/tmp/bdevperf.sock 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 337327 ']' 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.455 00:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 
"name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.455 "trsvcid": "$NVMF_PORT", 00:29:14.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.455 "hdgst": ${hdgst:-false}, 00:29:14.455 "ddgst": ${ddgst:-false} 00:29:14.455 }, 00:29:14.455 "method": "bdev_nvme_attach_controller" 00:29:14.455 } 00:29:14.455 EOF 00:29:14.455 )") 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.455 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.455 { 00:29:14.455 "params": { 00:29:14.455 "name": "Nvme$subsystem", 00:29:14.455 "trtype": "$TEST_TRANSPORT", 00:29:14.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.455 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "$NVMF_PORT", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.456 "hdgst": ${hdgst:-false}, 00:29:14.456 "ddgst": ${ddgst:-false} 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 } 00:29:14.456 EOF 00:29:14.456 )") 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.456 { 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme$subsystem", 00:29:14.456 "trtype": "$TEST_TRANSPORT", 00:29:14.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "$NVMF_PORT", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.456 "hdgst": ${hdgst:-false}, 00:29:14.456 "ddgst": ${ddgst:-false} 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 } 00:29:14.456 EOF 00:29:14.456 )") 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.456 { 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme$subsystem", 00:29:14.456 "trtype": "$TEST_TRANSPORT", 00:29:14.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "$NVMF_PORT", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.456 "hdgst": ${hdgst:-false}, 00:29:14.456 "ddgst": ${ddgst:-false} 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 } 00:29:14.456 EOF 00:29:14.456 )") 00:29:14.456 00:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:14.456 00:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme1", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme2", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme3", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme4", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme5", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme6", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme7", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme8", 00:29:14.456 "trtype": "tcp", 
00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme9", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 },{ 00:29:14.456 "params": { 00:29:14.456 "name": "Nvme10", 00:29:14.456 "trtype": "tcp", 00:29:14.456 "traddr": "10.0.0.2", 00:29:14.456 "adrfam": "ipv4", 00:29:14.456 "trsvcid": "4420", 00:29:14.456 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:14.456 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:14.456 "hdgst": false, 00:29:14.456 "ddgst": false 00:29:14.456 }, 00:29:14.456 "method": "bdev_nvme_attach_controller" 00:29:14.456 }' 00:29:14.456 [2024-12-07 00:56:30.492664] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:14.456 [2024-12-07 00:56:30.492740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337327 ] 00:29:14.456 [2024-12-07 00:56:30.568477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.715 [2024-12-07 00:56:30.617572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.092 Running I/O for 10 seconds... 
00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:16.659 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:16.660 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
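(Editor's aside, not part of the captured trace.) The commands above are the first pass of the waitforio helper: bdev_get_iostat on Nvme1n1 over /var/tmp/bdevperf.sock returns num_read_ops=67, which is below the 100-read threshold, so the loop sleeps 0.25 s and decrements its retry counter; the second pass, which continues just below, reads 131 and returns 0. Reconstructed from the logged commands, the loop is roughly the following sketch rather than the verbatim shutdown.sh source:

# Rough reconstruction of waitforio from the commands traced above
# (target/shutdown.sh@51..@70); rpc_cmd is the test framework's RPC wrapper.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1      # '[' -z /var/tmp/bdevperf.sock ']' in the trace
    [ -z "$bdev" ] && return 1          # '[' -z Nvme1n1 ']'
    local ret=1
    local i read_io_count
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                        # enough reads completed against the target
            break
        fi
        sleep 0.25                       # poll the iostat again, up to 10 attempts
    done
    return $ret
}

# As invoked in this test: waitforio /var/tmp/bdevperf.sock Nvme1n1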
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 337239 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 337239 ']' 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 337239 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337239 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337239' 00:29:16.937 killing process with pid 337239 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 337239 00:29:16.937 00:56:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 337239 00:29:16.937 [2024-12-07 00:56:32.929771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.937 [2024-12-07 00:56:32.929891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.929969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930263] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 
00:29:16.938 [2024-12-07 00:56:32.930566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.930717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7750 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set 00:29:16.938 [2024-12-07 00:56:32.932259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is 
same with the state(6) to be set
00:29:16.938 [2024-12-07 00:56:32.932272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa320 is same with the state(6) to be set
(same message repeated for tqpair=0x9fa320, timestamps 00:56:32.932296 through 00:56:32.932970)
00:29:16.939 [2024-12-07 00:56:32.935269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7c40 is same with the state(6) to be set
(same message repeated for tqpair=0x9f7c40, timestamps 00:56:32.935317 through 00:56:32.936100)
00:29:16.940 [2024-12-07 00:56:32.936883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.940 [2024-12-07 00:56:32.936924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, timestamps 00:56:32.936942 through 00:56:32.937024)
00:29:16.940 [2024-12-07 00:56:32.937047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a980 is same with the state(6) to be set
00:29:16.940 [2024-12-07 00:56:32.937152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.940 [2024-12-07 00:56:32.937175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, timestamps 00:56:32.937191 through 00:56:32.937265)
00:29:16.940 [2024-12-07 00:56:32.937278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685630 is same with the state(6) to be set
00:29:16.940 [2024-12-07 00:56:32.937336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.940 [2024-12-07 00:56:32.937358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, timestamps 00:56:32.937373 through 00:56:32.937440)
00:29:16.941 [2024-12-07 00:56:32.937454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1691630 is same with the state(6) to be set
00:29:16.941 [2024-12-07 00:56:32.938069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8110 is same with the state(6) to be set
(same message repeated for tqpair=0x9f8110, timestamps 00:56:32.938103 through 00:56:32.938953)
00:29:16.942 [2024-12-07 00:56:32.940983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8600 is same with the state(6) to be set
(same message repeated for tqpair=0x9f8600, timestamps 00:56:32.941043 through 00:56:32.941850)
00:29:16.942 [2024-12-07 00:56:32.943117] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:16.942 [2024-12-07 00:56:32.943200] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:16.942 [2024-12-07 00:56:32.943277] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:16.942 [2024-12-07 00:56:32.943566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.942 [2024-12-07 00:56:32.943597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeated for cid:26 through cid:63, lba:19712 through lba:24448, len:128, timestamps 00:56:32.943625 through 00:56:32.944880)
00:29:16.943 [2024-12-07 00:56:32.944110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8fa0 is same with the state(6) to be set
(same message repeated for tqpair=0x9f8fa0, interleaved with the WRITE/READ command output, timestamps 00:56:32.944137 through 00:56:32.944948)
00:29:16.945 [2024-12-07 00:56:32.944896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.945 [2024-12-07 00:56:32.944912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(READ command / ABORTED - SQ DELETION (00/08) completion pair repeated for cid:1 through cid:14, lba:16512 through lba:18176, len:128, timestamps 00:56:32.944930 through 00:56:32.945386)
00:29:16.945 [2024-12-07 00:56:32.945401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.945 [2024-12-07 00:56:32.945414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.945 [2024-12-07 00:56:32.945637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.945 [2024-12-07 00:56:32.945650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.946 [2024-12-07 00:56:32.945665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.946 [2024-12-07 00:56:32.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.946 [2024-12-07 00:56:32.945723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.946 [2024-12-07 00:56:32.946358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the 
state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946861] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:16.946 [2024-12-07 00:56:32.946876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.946981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 
is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.947368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9470 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.948740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:16.946 [2024-12-07 00:56:32.948823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb720 (9): Bad file descriptor 00:29:16.946 [2024-12-07 00:56:32.948882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.946 [2024-12-07 00:56:32.948884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.946 [2024-12-07 00:56:32.948904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.946 [2024-12-07 00:56:32.948916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.948920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.948938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.948945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.948951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.948960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.948964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.948974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.948976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.948988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-12-07 00:56:32.948989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with tid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 he state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with t[2024-12-07 00:56:32.949024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:29:16.947 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1e40 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to 
be set 00:29:16.947 [2024-12-07 00:56:32.949089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a980 (9): Bad file descriptor 00:29:16.947 [2024-12-07 00:56:32.949102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-12-07 00:56:32.949185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with tid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 he state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with t[2024-12-07 00:56:32.949200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:29:16.947 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949265] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16911a0 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with t[2024-12-07 00:56:32.949355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:29:16.947 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.947 [2024-12-07 00:56:32.949440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949450] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.947 [2024-12-07 00:56:32.949453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16868b0 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.947 [2024-12-07 00:56:32.949491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-07 00:56:32.949540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 he state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.949580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-07 00:56:32.949606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 he state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with 
t[2024-12-07 00:56:32.949621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(6) to be set 00:29:16.948 id:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with t[2024-12-07 00:56:32.949636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:29:16.948 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.949649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159c610 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-07 00:56:32.949734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 he state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.949774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9e30 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.948 [2024-12-07 00:56:32.949793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.949807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:16.948 [2024-12-07 00:56:32.949825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.949839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168dea0 is same with the state(6) to be set 00:29:16.948 [2024-12-07 00:56:32.949867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1685630 (9): Bad file descriptor 00:29:16.948 [2024-12-07 00:56:32.949899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691630 (9): Bad file descriptor 00:29:16.948 [2024-12-07 00:56:32.950249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.948 [2024-12-07 00:56:32.950499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.948 [2024-12-07 00:56:32.950514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.950949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.950974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.949 [2024-12-07 00:56:32.951575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.949 [2024-12-07 00:56:32.951589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.951987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.950 [2024-12-07 00:56:32.952382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.950 [2024-12-07 00:56:32.952658] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:16.950 [2024-12-07 00:56:32.954288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:16.950 [2024-12-07 00:56:32.954325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af1e40 (9): Bad file descriptor 00:29:16.950 [2024-12-07 00:56:32.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.950 [2024-12-07 00:56:32.954516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abb720 with addr=10.0.0.2, port=4420 00:29:16.950 [2024-12-07 00:56:32.954534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb720 is same with the state(6) to be set 00:29:16.950 [2024-12-07 00:56:32.954639] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:16.950 [2024-12-07 00:56:32.954816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb720 (9): Bad file descriptor 00:29:16.950 [2024-12-07 00:56:32.954940] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:16.950 [2024-12-07 00:56:32.955389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.950 [2024-12-07 00:56:32.955418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af1e40 with addr=10.0.0.2, port=4420 00:29:16.950 [2024-12-07 00:56:32.955436] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1e40 is same with the state(6) to be set 00:29:16.950 [2024-12-07 00:56:32.955453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:16.950 [2024-12-07 00:56:32.955468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:16.950 [2024-12-07 00:56:32.955484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:16.950 [2024-12-07 00:56:32.955500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:16.950 [2024-12-07 00:56:32.955609] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:16.950 [2024-12-07 00:56:32.955649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af1e40 (9): Bad file descriptor 00:29:16.951 [2024-12-07 00:56:32.955735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:16.951 [2024-12-07 00:56:32.955756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:16.951 [2024-12-07 00:56:32.955771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:16.951 [2024-12-07 00:56:32.955784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:16.951 [2024-12-07 00:56:32.958821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.951 [2024-12-07 00:56:32.958859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.958882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.951 [2024-12-07 00:56:32.958897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.958915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.951 [2024-12-07 00:56:32.958928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.958942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.951 [2024-12-07 00:56:32.958956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.958969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f70 is same with the state(6) to be set 00:29:16.951 [2024-12-07 00:56:32.959024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16911a0 (9): Bad file descriptor 00:29:16.951 [2024-12-07 00:56:32.959070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16868b0 (9): Bad file descriptor 00:29:16.951 [2024-12-07 
00:56:32.959103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159c610 (9): Bad file descriptor 00:29:16.951 [2024-12-07 00:56:32.959135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168dea0 (9): Bad file descriptor 00:29:16.951 [2024-12-07 00:56:32.959309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.959972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.959989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.960014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.960031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.960052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.960068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.960082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.960098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.951 [2024-12-07 00:56:32.960113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.951 [2024-12-07 00:56:32.960128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.952 [2024-12-07 00:56:32.960891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.960969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.960984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.952 [2024-12-07 00:56:32.961193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.952 [2024-12-07 00:56:32.961208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 
00:56:32.961223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.961240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.961253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.961269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.961287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.961303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.961317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.961333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.961353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.961367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18954c0 is same with the state(6) to be set 00:29:16.953 [2024-12-07 00:56:32.962648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.962963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.962979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.953 [2024-12-07 00:56:32.963437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.953 [2024-12-07 00:56:32.963452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.963975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.963990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.954 [2024-12-07 00:56:32.964418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.954 [2024-12-07 00:56:32.964436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.964712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.955 [2024-12-07 00:56:32.964726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896530 is same with the state(6) to be set 00:29:16.955 [2024-12-07 00:56:32.966036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.955 [2024-12-07 00:56:32.966070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.955 [2024-12-07 00:56:32.966092 - 00:56:32.968156] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:1-63 nsid:1 lba:8320-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; completions: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.957 [2024-12-07 00:56:32.968170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d1f90 is same with the state(6) to be set
00:29:16.957 [2024-12-07 00:56:32.969416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:16.957 [2024-12-07 00:56:32.969450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:16.957 [2024-12-07 00:56:32.969479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:16.957 [2024-12-07 00:56:32.969596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0f70 (9): Bad file descriptor
00:29:16.957 [2024-12-07 00:56:32.969951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.957 [2024-12-07 00:56:32.969983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1691630 with addr=10.0.0.2, port=4420
00:29:16.957 [2024-12-07 00:56:32.970012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1691630 is same with the state(6) to be set
00:29:16.957 [2024-12-07 00:56:32.970128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.957 [2024-12-07 00:56:32.970154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1685630 with addr=10.0.0.2, port=4420
00:29:16.957 [2024-12-07 00:56:32.970171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685630 is same with the state(6) to be set
00:29:16.957 [2024-12-07 00:56:32.970252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.957 [2024-12-07 00:56:32.970278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a980 with addr=10.0.0.2, port=4420
00:29:16.957 [2024-12-07 00:56:32.970300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a980 is same with the state(6) to be set
00:29:16.957 [2024-12-07 00:56:32.970893 - 00:56:32.972943] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; completions: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.959 [2024-12-07 00:56:32.972957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a659a0 is same with the state(6) to be set
00:29:16.959 [2024-12-07 00:56:32.974247 - 00:56:32.983878] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; completions: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.961 [2024-12-07 00:56:32.983894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a92860 is same with the state(6) to be set
00:29:16.961 [2024-12-07 00:56:32.985255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.961 [2024-12-07 00:56:32.985280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.961 [2024-12-07 00:56:32.985304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.961 [2024-12-07 00:56:32.985320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.961 [2024-12-07 00:56:32.985336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.961 [2024-12-07 00:56:32.985351]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.961 [2024-12-07 00:56:32.985904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.961 [2024-12-07 00:56:32.985920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.985935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.985951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.985966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.985982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.962 [2024-12-07 00:56:32.986914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.962 [2024-12-07 00:56:32.986930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.986944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.986961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.986975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.986991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.987316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a94d00 is same with the state(6) to be set 00:29:16.963 [2024-12-07 00:56:32.988559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.988967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.988983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.963 [2024-12-07 00:56:32.989183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.963 [2024-12-07 00:56:32.989197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.989975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.989991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.990014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.990031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.990045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.990061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.964 [2024-12-07 00:56:32.990076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.964 [2024-12-07 00:56:32.990092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.965 [2024-12-07 00:56:32.990107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 
00:56:32.990423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.965 [2024-12-07 00:56:32.990578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.965 [2024-12-07 00:56:32.990593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a96030 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.992102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691630 (9): Bad file descriptor 00:29:16.965 [2024-12-07 00:56:32.992326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1685630 (9): Bad file descriptor 00:29:16.965 [2024-12-07 00:56:32.992346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a980 (9): Bad file descriptor 00:29:16.965 [2024-12-07 00:56:32.992413] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:29:16.965 [2024-12-07 00:56:32.992441] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:16.965 [2024-12-07 00:56:32.992462] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:16.965 [2024-12-07 00:56:32.992484] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:16.965 [2024-12-07 00:56:32.992596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:16.965 [2024-12-07 00:56:32.992816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.965 [2024-12-07 00:56:32.992846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abb720 with addr=10.0.0.2, port=4420 00:29:16.965 [2024-12-07 00:56:32.992863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb720 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.992952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.965 [2024-12-07 00:56:32.992979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af1e40 with addr=10.0.0.2, port=4420 00:29:16.965 [2024-12-07 00:56:32.993004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1e40 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.993080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.965 [2024-12-07 00:56:32.993106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16911a0 with addr=10.0.0.2, port=4420 00:29:16.965 [2024-12-07 00:56:32.993121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16911a0 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.993214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.965 [2024-12-07 00:56:32.993240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168dea0 with addr=10.0.0.2, port=4420 00:29:16.965 [2024-12-07 00:56:32.993256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168dea0 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.993340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.965 [2024-12-07 00:56:32.993366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16868b0 with addr=10.0.0.2, port=4420 00:29:16.965 [2024-12-07 00:56:32.993383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16868b0 is same with the state(6) to be set 00:29:16.965 [2024-12-07 00:56:32.993400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:16.965 [2024-12-07 00:56:32.993413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:16.965 [2024-12-07 00:56:32.993435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:16.965 [2024-12-07 00:56:32.993452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:16.965 [2024-12-07 00:56:32.993468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:16.965 [2024-12-07 00:56:32.993481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:16.965 [2024-12-07 00:56:32.993495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:16.965 [2024-12-07 00:56:32.993507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:16.965 [2024-12-07 00:56:32.993521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:16.965 [2024-12-07 00:56:32.993533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:16.965 [2024-12-07 00:56:32.993546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:16.966 [2024-12-07 00:56:32.993559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:16.966 [2024-12-07 00:56:32.994661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.994972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.994987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.966 [2024-12-07 00:56:32.995706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.966 [2024-12-07 00:56:32.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.995971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.995987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.967 [2024-12-07 00:56:32.996714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.967 [2024-12-07 00:56:32.996729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a98600 is same with the state(6) to be set 00:29:16.967 task offset: 19584 on job bdev=Nvme5n1 fails 00:29:16.967 00:29:16.967 Latency(us) 00:29:16.967 [2024-12-06T23:56:33.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.968 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme1n1 ended in about 0.76 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme1n1 : 0.76 169.26 10.58 84.63 0.00 248725.81 19126.80 234570.33 00:29:16.968 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme2n1 
ended in about 0.76 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme2n1 : 0.76 168.52 10.53 84.26 0.00 243701.63 18835.53 254765.13 00:29:16.968 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme3n1 ended in about 0.77 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme3n1 : 0.77 166.71 10.42 83.36 0.00 240268.83 30292.20 239230.67 00:29:16.968 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme4n1 ended in about 0.78 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme4n1 : 0.78 164.37 10.27 82.19 0.00 237772.29 18350.08 257872.02 00:29:16.968 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme5n1 ended in about 0.74 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme5n1 : 0.74 172.55 10.78 86.28 0.00 219389.03 3737.98 254765.13 00:29:16.968 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme6n1 ended in about 0.78 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme6n1 : 0.78 163.67 10.23 81.83 0.00 226649.63 22233.69 256318.58 00:29:16.968 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme7n1 ended in about 0.79 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme7n1 : 0.79 162.98 10.19 81.49 0.00 221637.34 19320.98 251658.24 00:29:16.968 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme8n1 ended in about 0.75 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme8n1 : 0.75 171.14 10.70 85.57 0.00 203221.71 4708.88 242337.56 00:29:16.968 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme9n1 ended in about 0.79 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme9n1 : 0.79 80.86 5.05 80.86 0.00 318054.40 20680.25 285834.05 00:29:16.968 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:16.968 Job: Nvme10n1 ended in about 0.76 seconds with error 00:29:16.968 Verification LBA range: start 0x0 length 0x400 00:29:16.968 Nvme10n1 : 0.76 83.88 5.24 83.88 0.00 295264.33 20874.43 271853.04 00:29:16.968 [2024-12-06T23:56:33.119Z] =================================================================================================================== 00:29:16.968 [2024-12-06T23:56:33.119Z] Total : 1503.95 94.00 834.34 0.00 241097.73 3737.98 285834.05 00:29:16.968 [2024-12-07 00:56:33.027902] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:16.968 [2024-12-07 00:56:33.028003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:16.968 [2024-12-07 00:56:33.028265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.968 [2024-12-07 00:56:33.028302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159c610 with addr=10.0.0.2, port=4420 00:29:16.968 [2024-12-07 00:56:33.028324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159c610 is same with the state(6) to be set 00:29:16.968 [2024-12-07 00:56:33.028353] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb720 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af1e40 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16911a0 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168dea0 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16868b0 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.968 [2024-12-07 00:56:33.028780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af0f70 with addr=10.0.0.2, port=4420 00:29:16.968 [2024-12-07 00:56:33.028798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f70 is same with the state(6) to be set 00:29:16.968 [2024-12-07 00:56:33.028819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159c610 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.028839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.028853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.028871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.028888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:16.968 [2024-12-07 00:56:33.028904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.028917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.028932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.028954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:16.968 [2024-12-07 00:56:33.028970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.028983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.029006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.029022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:16.968 [2024-12-07 00:56:33.029044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.029057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.029069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.029082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:16.968 [2024-12-07 00:56:33.029096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.029108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.029120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.029133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:16.968 [2024-12-07 00:56:33.029220] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:16.968 [2024-12-07 00:56:33.029640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0f70 (9): Bad file descriptor 00:29:16.968 [2024-12-07 00:56:33.029668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:16.968 [2024-12-07 00:56:33.029683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:16.968 [2024-12-07 00:56:33.029697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:16.968 [2024-12-07 00:56:33.029711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:16.968 [2024-12-07 00:56:33.030060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:16.968 [2024-12-07 00:56:33.030088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:16.968 [2024-12-07 00:56:33.030108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:16.968 [2024-12-07 00:56:33.030125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:16.969 [2024-12-07 00:56:33.030143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:16.969 [2024-12-07 00:56:33.030160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:16.969 [2024-12-07 00:56:33.030216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.030235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.030249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:29:16.969 [2024-12-07 00:56:33.030262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.030307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:16.969 [2024-12-07 00:56:33.030330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:16.969 [2024-12-07 00:56:33.030456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.030485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a980 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.030502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a980 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.030578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.030603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1685630 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.030619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685630 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.030703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.030728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1691630 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.030745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1691630 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.030828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.030853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16868b0 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.030870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16868b0 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.030944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.030969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168dea0 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.030986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168dea0 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.031095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.031121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16911a0 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.031138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16911a0 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.031250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.031277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af1e40 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.031293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1af1e40 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.031381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.969 [2024-12-07 00:56:33.031406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abb720 with addr=10.0.0.2, port=4420 00:29:16.969 [2024-12-07 00:56:33.031422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb720 is same with the state(6) to be set 00:29:16.969 [2024-12-07 00:56:33.031442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a980 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1685630 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1691630 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16868b0 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168dea0 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16911a0 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af1e40 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb720 (9): Bad file descriptor 00:29:16.969 [2024-12-07 00:56:33.031631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.031673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.031726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:16.969 [2024-12-07 00:56:33.031778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.031829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.031881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.031907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.031920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.031933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.031975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.032004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.032021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:16.969 [2024-12-07 00:56:33.032034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:16.969 [2024-12-07 00:56:33.032049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:16.969 [2024-12-07 00:56:33.032061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:16.969 [2024-12-07 00:56:33.032075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:16.970 [2024-12-07 00:56:33.032088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
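For readers scanning the bdevperf summary above: each per-device row lists runtime(s), IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds, for the 64 KiB verify workload at queue depth 64 shown in the job headers. As a quick cross-check (a minimal bash sketch using only values printed in the table, not part of the test scripts), the MiB/s column is consistent with IOPS times the 65536-byte I/O size:
awk 'BEGIN { iops = 169.26; io_size = 65536; printf "MiB/s = %.2f\n", iops * io_size / (1024*1024) }'   # prints 10.58, matching the Nvme1n1 row
# the totals row checks out the same way: 1503.95 IOPS * 64 KiB = ~94.00 MiB/s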
00:29:17.539 00:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 337327 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 337327 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.478 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 337327 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.479 rmmod nvme_tcp 00:29:18.479 
rmmod nvme_fabrics 00:29:18.479 rmmod nvme_keyring 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 337239 ']' 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 337239 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 337239 ']' 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 337239 00:29:18.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (337239) - No such process 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 337239 is not found' 00:29:18.479 Process with pid 337239 is not found 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.479 00:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.014 00:29:21.014 real 0m7.312s 00:29:21.014 user 0m17.426s 00:29:21.014 sys 0m1.314s 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 ************************************ 00:29:21.014 END TEST nvmf_shutdown_tc3 00:29:21.014 ************************************ 00:29:21.014 00:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 ************************************ 00:29:21.014 START TEST nvmf_shutdown_tc4 00:29:21.014 ************************************ 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.014 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:21.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:21.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.015 00:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:21.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:21.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.015 00:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:29:21.015 00:29:21.015 --- 10.0.0.2 ping statistics --- 00:29:21.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.015 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:21.015 00:29:21.015 --- 10.0.0.1 ping statistics --- 00:29:21.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.015 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=338228 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 338228 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 338228 ']' 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
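The xtrace above is the per-test bring-up done by nvmf/common.sh: the two E810 ports (0000:0a:00.0 and 0000:0a:00.1, device 0x159b bound to the ice driver) are matched to their net devices cvl_0_0 and cvl_0_1, the target-side port is moved into a dedicated network namespace, 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator) are assigned, TCP port 4420 is opened in iptables, connectivity is verified with ping in both directions, and nvmf_tgt is launched inside the namespace with core mask 0x1E. NVMF_APP is prefixed with the "ip netns exec" wrapper via NVMF_TARGET_NS_CMD at common.sh@293, which is seemingly why the launch command at common.sh@508 shows that prefix repeated after the earlier subtests re-ran the init. A condensed sketch of the equivalent manual steps, using the interface names and addresses from this log (the nvmf_tgt path is illustrative):

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &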
00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.015 00:56:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.015 [2024-12-07 00:56:36.865744] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:21.015 [2024-12-07 00:56:36.865812] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.015 [2024-12-07 00:56:36.934828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.015 [2024-12-07 00:56:36.978009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.015 [2024-12-07 00:56:36.978064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.015 [2024-12-07 00:56:36.978086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.015 [2024-12-07 00:56:36.978097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.015 [2024-12-07 00:56:36.978106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.015 [2024-12-07 00:56:36.979505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.015 [2024-12-07 00:56:36.979568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.015 [2024-12-07 00:56:36.979634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:21.015 [2024-12-07 00:56:36.979637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.015 [2024-12-07 00:56:37.124315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:21.015 00:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.015 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.274 Malloc1 
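From here target/shutdown.sh builds a single RPC batch: shutdown.sh@27 removes any old rpcs.txt, the @28/@29 loop appends one group of RPCs per subsystem (ten in total, per num_subsystems=({1..10})), and the bare rpc_cmd at @36 feeds the whole file to the target over the RPC socket; the Malloc1 notice above and the Malloc2–Malloc10 notices below are those bdevs being created. The stdin redirection is not visible in the xtrace, and the bdev size, block size and serial numbers are assumptions based on the usual SPDK nvmf test defaults, but each iteration appends roughly the following:

    rm -f rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    scripts/rpc.py < rpcs.txt    # equivalent of the rpc_cmd < rpcs.txt at shutdown.sh@36

Staging all forty calls in rpcs.txt and sending them through one invocation is presumably what keeps subsystem creation fast here, compared with forty separate RPC round trips.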
00:29:21.274 [2024-12-07 00:56:37.213854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.274 Malloc2 00:29:21.274 Malloc3 00:29:21.274 Malloc4 00:29:21.274 Malloc5 00:29:21.534 Malloc6 00:29:21.534 Malloc7 00:29:21.534 Malloc8 00:29:21.534 Malloc9 00:29:21.534 Malloc10 00:29:21.534 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.534 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:21.534 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.534 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.792 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=338400 00:29:21.792 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:21.792 00:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:21.793 [2024-12-07 00:56:37.749301] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 338228 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 338228 ']' 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 338228 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338228 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338228' 00:29:27.070 killing process with pid 338228 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 338228 00:29:27.070 00:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 338228 00:29:27.070 Write completed with error (sct=0, sc=8) 
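The shutdown_tc4 scenario itself is compact: spdk_nvme_perf is started in the background against the listener at 10.0.0.2:4420 with the options shown in the trace (-q 128 -o 45056 -O 4096 -w randwrite -t 20 -P 4), the script sleeps 5 seconds so writes are in flight (shutdown.sh@150), and then the nvmf_tgt process (pid 338228 in this run) is killed while its queues are busy, which is what produces the wall of aborted writes below. A sketch of that sequence; $nvmfpid is assumed to hold the target pid:

    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -P 4 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' &
    perfpid=$!
    sleep 5                   # let I/O ramp up before pulling the target away
    kill "$nvmfpid"           # terminate nvmf_tgt while 128-deep queues are outstanding
    wait "$perfpid" || true   # perf reports the aborted I/O seen below and exits

The deprecation warning about allowing a connection to the discovery subsystem on TCP/10.0.0.2/4420 is emitted by the target when perf first connects and is unrelated to the shutdown behaviour under test.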
00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.070 starting I/O failed: -6 00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.070 starting I/O failed: -6 00:29:27.070 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 [2024-12-07 00:56:42.733150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.071 starting I/O failed: -6 00:29:27.071 [2024-12-07 00:56:42.733482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11961c0 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.733533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11961c0 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.733549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11961c0 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.733562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11961c0 is same with the state(6) to be set 00:29:27.071 starting I/O failed: -6 00:29:27.071 starting I/O failed: -6 00:29:27.071 starting I/O failed: -6 00:29:27.071 [2024-12-07 00:56:42.734081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 [2024-12-07 00:56:42.734208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195310 is same with the state(6) to be set 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed 
with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 [2024-12-07 00:56:42.735030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error 
(sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 starting I/O failed: -6 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.071 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 [2024-12-07 00:56:42.736223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.072 NVMe io qpair process completion error 00:29:27.072 [2024-12-07 00:56:42.743789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198770 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.743844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198770 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.744009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11978e0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.744041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11978e0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.745915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199ab0 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.746603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.746633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.746658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 starting I/O failed: -6 00:29:27.072 [2024-12-07 00:56:42.746672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.746686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.746700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.746712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.746724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 [2024-12-07 00:56:42.746737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198c40 is same with the state(6) to be set 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write 
completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 [2024-12-07 00:56:42.747472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 
00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 [2024-12-07 00:56:42.748543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 starting I/O failed: -6 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.072 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O 
failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 [2024-12-07 00:56:42.749667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O 
failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 [2024-12-07 00:56:42.751171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.073 NVMe io qpair process completion error 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with 
error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 starting I/O failed: -6 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.073 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 [2024-12-07 00:56:42.752455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error 
(sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 [2024-12-07 00:56:42.753528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 
00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 [2024-12-07 00:56:42.754620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 
00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.074 Write completed with error (sct=0, sc=8) 00:29:27.074 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 
00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 [2024-12-07 00:56:42.756540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.075 NVMe io qpair process completion error 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 
00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 [2024-12-07 00:56:42.757731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 starting I/O failed: -6 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.075 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 
00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 [2024-12-07 00:56:42.758788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O 
failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 [2024-12-07 00:56:42.759969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O 
failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.076 starting I/O failed: -6 00:29:27.076 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 [2024-12-07 00:56:42.761901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.077 NVMe io qpair process completion error 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 
Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 [2024-12-07 00:56:42.763362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with 
error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 [2024-12-07 00:56:42.764435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, 
sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 Write completed with error (sct=0, sc=8) 00:29:27.077 starting I/O failed: -6 00:29:27.077 [2024-12-07 00:56:42.765566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write 
completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write 
completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 [2024-12-07 00:56:42.768846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.078 NVMe io qpair process completion error 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error 
(sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 [2024-12-07 00:56:42.770224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 Write completed with error (sct=0, sc=8) 00:29:27.078 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 
00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 [2024-12-07 00:56:42.771356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O 
failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 [2024-12-07 00:56:42.772449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, 
sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.079 starting I/O failed: -6 00:29:27.079 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 [2024-12-07 00:56:42.774924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.080 NVMe io qpair process completion error 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 starting I/O failed: -6 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 00:29:27.080 Write completed with error (sct=0, sc=8) 
00:29:27.080 starting I/O failed: -6
00:29:27.080 Write completed with error (sct=0, sc=8)
00:29:27.080 [... "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat for the remaining outstanding writes and new submissions on nqn.2016-06.io.spdk:cnode9 ...]
00:29:27.080 [2024-12-07 00:56:42.776280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.080 [2024-12-07 00:56:42.777373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:27.081 [2024-12-07 00:56:42.778478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.081 [2024-12-07 00:56:42.780220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:27.081 NVMe io qpair process completion error
00:29:27.081 Write completed with error (sct=0, sc=8)
00:29:27.082 starting I/O failed: -6
00:29:27.082 [... the same two messages repeat for the outstanding I/O on nqn.2016-06.io.spdk:cnode7 ...]
00:29:27.082 [2024-12-07 00:56:42.783232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.082 [2024-12-07 00:56:42.784358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.082 [2024-12-07 00:56:42.785523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:27.083 [2024-12-07 00:56:42.787531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:27.083 NVMe io qpair process completion error
00:29:27.083 Write completed with error (sct=0, sc=8)
00:29:27.083 starting I/O failed: -6
00:29:27.086 [... the same two messages repeat for the outstanding I/O on nqn.2016-06.io.spdk:cnode5 ...]
00:29:27.086 [2024-12-07 00:56:42.798829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:27.086 NVMe io qpair process completion error
00:29:27.086 Initializing NVMe Controllers
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:27.086 Controller IO queue size 128, less than required.
00:29:27.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:27.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:27.086 Initialization complete. Launching workers.
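The "Controller IO queue size 128, less than required" warnings above mean the perf initiator asked for more outstanding I/O per queue pair than the target's 128-entry IO queues can hold, so the overflow is queued inside the NVMe driver. A minimal sketch of an invocation that follows the advice by keeping the queue depth below the controller limit; the -q/-o/-w/-t values and the single cnode1 subsystem here are illustrative, not the parameters this test actually used:

    # Hypothetical re-run with a queue depth well under the target's 128-entry IO queue.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'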
00:29:27.086 ======================================================== 00:29:27.086 Latency(us) 00:29:27.086 Device Information : IOPS MiB/s Average min max 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1730.65 74.36 73984.35 1138.85 131348.76 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1816.76 78.06 70513.59 817.28 125608.31 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1810.84 77.81 70331.77 893.85 123937.56 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1752.65 75.31 73089.76 843.44 135245.11 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1751.17 75.25 73168.29 812.48 137651.62 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1798.57 77.28 71266.63 903.27 122957.44 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1736.99 74.64 73022.56 998.84 121236.91 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1796.45 77.19 70626.47 1095.14 122702.90 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1747.15 75.07 72645.91 1112.56 124901.78 00:29:27.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1743.13 74.90 72846.59 990.46 127384.93 00:29:27.086 ======================================================== 00:29:27.086 Total : 17684.36 759.87 72127.94 812.48 137651.62 00:29:27.086 00:29:27.086 [2024-12-07 00:56:42.805159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518e10 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15197a0 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151cb30 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1519140 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15176a0 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15179d0 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517190 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517370 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1519470 is same with the state(6) to be set 00:29:27.086 [2024-12-07 00:56:42.805762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516fb0 is same with the state(6) to be set 00:29:27.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:27.347 00:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:28.286 00:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 338400 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 338400 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 338400 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.286 rmmod nvme_tcp 00:29:28.286 rmmod nvme_fabrics 00:29:28.286 rmmod nvme_keyring 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 338228 ']' 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 338228 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 338228 ']' 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 338228 00:29:28.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (338228) - No such process 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 338228 is not found' 00:29:28.286 Process with pid 338228 is not found 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.286 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.287 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.287 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.287 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.287 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.287 00:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.196 00:29:30.196 real 0m9.685s 00:29:30.196 user 0m24.207s 00:29:30.196 sys 0m5.427s 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:30.196 ************************************ 00:29:30.196 END TEST nvmf_shutdown_tc4 00:29:30.196 ************************************ 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:30.196 00:29:30.196 real 0m36.744s 00:29:30.196 user 1m38.629s 00:29:30.196 sys 0m11.655s 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.196 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:29:30.196 ************************************ 00:29:30.196 END TEST nvmf_shutdown 00:29:30.196 ************************************ 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:30.456 ************************************ 00:29:30.456 START TEST nvmf_nsid 00:29:30.456 ************************************ 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:30.456 * Looking for test storage... 00:29:30.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:30.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.456 --rc genhtml_branch_coverage=1 00:29:30.456 --rc genhtml_function_coverage=1 00:29:30.456 --rc genhtml_legend=1 00:29:30.456 --rc geninfo_all_blocks=1 00:29:30.456 --rc geninfo_unexecuted_blocks=1 00:29:30.456 00:29:30.456 ' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:30.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.456 --rc genhtml_branch_coverage=1 00:29:30.456 --rc genhtml_function_coverage=1 00:29:30.456 --rc genhtml_legend=1 00:29:30.456 --rc geninfo_all_blocks=1 00:29:30.456 --rc geninfo_unexecuted_blocks=1 00:29:30.456 00:29:30.456 ' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:30.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.456 --rc genhtml_branch_coverage=1 00:29:30.456 --rc genhtml_function_coverage=1 00:29:30.456 --rc genhtml_legend=1 00:29:30.456 --rc geninfo_all_blocks=1 00:29:30.456 --rc geninfo_unexecuted_blocks=1 00:29:30.456 00:29:30.456 ' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:30.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.456 --rc genhtml_branch_coverage=1 00:29:30.456 --rc genhtml_function_coverage=1 00:29:30.456 --rc genhtml_legend=1 00:29:30.456 --rc geninfo_all_blocks=1 00:29:30.456 --rc geninfo_unexecuted_blocks=1 00:29:30.456 00:29:30.456 ' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:30.456 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:30.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.457 00:56:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:32.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:32.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
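The discovery sequence around this point pairs each supported PCI function with its kernel net device by globbing sysfs, exactly as the traced pci_net_devs assignments show. A minimal standalone sketch of that lookup (the two PCI addresses are hard-coded here purely for illustration):

for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # the kernel exposes the bound netdev under the PCI device node
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done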
00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:32.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:32.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.994 00:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.994 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:29:32.995 00:29:32.995 --- 10.0.0.2 ping statistics --- 00:29:32.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.995 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:32.995 00:29:32.995 --- 10.0.0.1 ping statistics --- 00:29:32.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.995 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=341040 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 341040 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 341040 ']' 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.995 00:56:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:32.995 [2024-12-07 00:56:48.945566] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:29:32.995 [2024-12-07 00:56:48.945652] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.995 [2024-12-07 00:56:49.028174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.995 [2024-12-07 00:56:49.075104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.995 [2024-12-07 00:56:49.075169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.995 [2024-12-07 00:56:49.075184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.995 [2024-12-07 00:56:49.075195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.995 [2024-12-07 00:56:49.075204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.995 [2024-12-07 00:56:49.075817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=341165 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
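At this point a second SPDK target (pid 341165) has been launched with its own RPC socket, /var/tmp/tgt2.sock, and the test later drives it through rpc.py before connecting to it at 10.0.0.1:4421. The RPC batch itself is not echoed in this excerpt; a minimal sketch of the kind of calls involved, using standard rpc.py method names and the subsystem/address values visible in the surrounding trace (assumed, not shown verbatim; namespace wiring omitted):

./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &                      # second target on a separate RPC socket, as traced above
./scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp    # "TCP Transport Init" in the log
./scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
./scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421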
00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=240a3c4a-6f81-4d97-a504-66121ccb9723 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fc51852e-5c01-4046-80c5-2b2c682a6952 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5ab2821c-3978-4ed8-b552-e0f0e17eee9b 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:33.254 null0 00:29:33.254 null1 00:29:33.254 null2 00:29:33.254 [2024-12-07 00:56:49.263147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.254 [2024-12-07 00:56:49.283186] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:33.254 [2024-12-07 00:56:49.283267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341165 ] 00:29:33.254 [2024-12-07 00:56:49.287370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 341165 /var/tmp/tgt2.sock 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 341165 ']' 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:33.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
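The three namespace UUIDs generated above are what the test later compares against the NGUIDs reported by the connected namespaces (the comparisons appear after the nvme connect below). A minimal sketch of that check for the first namespace, assuming bash 4+ for the ${var^^} upper-casing; the exact bodies of uuid2nguid and nvme_get_nguid are not shown in this excerpt:

ns1uuid=$(uuidgen)                                    # e.g. 240a3c4a-6f81-4d97-a504-66121ccb9723
expected=$(echo "$ns1uuid" | tr -d -)                 # strip the dashes, as the traced tr -d - does
expected=${expected^^}                                # the test expects the NGUID to equal the dash-less UUID, compared in upper case
nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)   # query the namespace and pull .nguid, as traced below
[[ ${nguid^^} == "$expected" ]] && echo "NSID 1 NGUID matches its UUID"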
00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.254 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:33.254 [2024-12-07 00:56:49.357760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.254 [2024-12-07 00:56:49.403026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.512 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.513 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:33.513 00:56:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:34.080 [2024-12-07 00:56:50.062742] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.080 [2024-12-07 00:56:50.078887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:34.080 nvme0n1 nvme0n2 00:29:34.080 nvme1n1 00:29:34.080 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:34.080 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:34.080 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:34.646 00:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:35.585 00:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 240a3c4a-6f81-4d97-a504-66121ccb9723 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:35.585 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=240a3c4a6f814d97a50466121ccb9723 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 240A3C4A6F814D97A50466121CCB9723 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 240A3C4A6F814D97A50466121CCB9723 == \2\4\0\A\3\C\4\A\6\F\8\1\4\D\9\7\A\5\0\4\6\6\1\2\1\C\C\B\9\7\2\3 ]] 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fc51852e-5c01-4046-80c5-2b2c682a6952 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fc51852e5c01404680c52b2c682a6952 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FC51852E5C01404680C52B2C682A6952 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FC51852E5C01404680C52B2C682A6952 == \F\C\5\1\8\5\2\E\5\C\0\1\4\0\4\6\8\0\C\5\2\B\2\C\6\8\2\A\6\9\5\2 ]] 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:35.843 00:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5ab2821c-3978-4ed8-b552-e0f0e17eee9b 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5ab2821c39784ed8b552e0f0e17eee9b 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5AB2821C39784ED8B552E0F0E17EEE9B 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5AB2821C39784ED8B552E0F0E17EEE9B == \5\A\B\2\8\2\1\C\3\9\7\8\4\E\D\8\B\5\5\2\E\0\F\0\E\1\7\E\E\E\9\B ]] 00:29:35.843 00:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:36.102 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:36.102 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 341165 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 341165 ']' 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 341165 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341165 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341165' 00:29:36.103 killing process with pid 341165 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 341165 00:29:36.103 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 341165 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.362 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.621 rmmod nvme_tcp 00:29:36.621 rmmod nvme_fabrics 00:29:36.621 rmmod nvme_keyring 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 341040 ']' 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 341040 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 341040 ']' 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 341040 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341040 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341040' 00:29:36.621 killing process with pid 341040 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 341040 00:29:36.621 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 341040 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.881 00:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.788 00:29:38.788 real 0m8.456s 00:29:38.788 user 0m8.293s 00:29:38.788 
sys 0m2.769s 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:38.788 ************************************ 00:29:38.788 END TEST nvmf_nsid 00:29:38.788 ************************************ 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:38.788 00:29:38.788 real 18m4.597s 00:29:38.788 user 50m11.863s 00:29:38.788 sys 3m55.023s 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.788 00:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:38.788 ************************************ 00:29:38.788 END TEST nvmf_target_extra 00:29:38.788 ************************************ 00:29:38.788 00:56:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:38.788 00:56:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.788 00:56:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.788 00:56:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.788 ************************************ 00:29:38.788 START TEST nvmf_host 00:29:38.788 ************************************ 00:29:38.788 00:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:39.068 * Looking for test storage... 00:29:39.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:39.068 00:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:39.068 00:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:39.068 00:56:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.068 --rc genhtml_branch_coverage=1 00:29:39.068 --rc genhtml_function_coverage=1 00:29:39.068 --rc genhtml_legend=1 00:29:39.068 --rc geninfo_all_blocks=1 00:29:39.068 --rc geninfo_unexecuted_blocks=1 00:29:39.068 00:29:39.068 ' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.068 --rc genhtml_branch_coverage=1 00:29:39.068 --rc genhtml_function_coverage=1 00:29:39.068 --rc genhtml_legend=1 00:29:39.068 --rc geninfo_all_blocks=1 00:29:39.068 --rc geninfo_unexecuted_blocks=1 00:29:39.068 00:29:39.068 ' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.068 --rc genhtml_branch_coverage=1 00:29:39.068 --rc genhtml_function_coverage=1 00:29:39.068 --rc genhtml_legend=1 00:29:39.068 --rc geninfo_all_blocks=1 00:29:39.068 --rc geninfo_unexecuted_blocks=1 00:29:39.068 00:29:39.068 ' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.068 --rc genhtml_branch_coverage=1 00:29:39.068 --rc genhtml_function_coverage=1 00:29:39.068 --rc genhtml_legend=1 00:29:39.068 --rc geninfo_all_blocks=1 00:29:39.068 --rc geninfo_unexecuted_blocks=1 00:29:39.068 00:29:39.068 ' 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.068 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
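The nvmf_nsid checks that finish above verify namespace identity by reading each namespace's NGUID with nvme-cli and comparing it against the UUID supplied at namespace creation, with the dashes stripped and the hex uppercased (the uuid2nguid and nvme_get_nguid helpers in the trace). A minimal standalone sketch of that comparison, assuming nvme-cli and jq are installed and using /dev/nvme0n1 plus the first UUID from the log purely for illustration:

#!/usr/bin/env bash
# Sketch only: check that a namespace's reported NGUID matches an expected UUID.
set -euo pipefail

dev=/dev/nvme0n1                                        # assumed device node
expected_uuid=240a3c4a-6f81-4d97-a504-66121ccb9723      # UUID taken from the log above

# uuid -> nguid: drop the dashes, uppercase the hex (what uuid2nguid does)
expected_nguid=$(tr -d '-' <<< "$expected_uuid" | tr '[:lower:]' '[:upper:]')

# NGUID as reported by the controller (what nvme_get_nguid does)
actual_nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')

if [[ $actual_nguid == "$expected_nguid" ]]; then
    echo "NGUID matches: $actual_nguid"
else
    echo "NGUID mismatch: expected $expected_nguid, got $actual_nguid" >&2
    exit 1
fi

The test repeats this check for nvme0n1, nvme0n2 and nvme0n3 before disconnecting the controller and tearing the target down, which is the cleanup visible above.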
00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.069 ************************************ 00:29:39.069 START TEST nvmf_multicontroller 00:29:39.069 ************************************ 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:39.069 * Looking for test storage... 
00:29:39.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:39.069 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.327 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:39.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.328 --rc genhtml_branch_coverage=1 00:29:39.328 --rc genhtml_function_coverage=1 00:29:39.328 --rc genhtml_legend=1 00:29:39.328 --rc geninfo_all_blocks=1 00:29:39.328 --rc geninfo_unexecuted_blocks=1 00:29:39.328 00:29:39.328 ' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:39.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.328 --rc genhtml_branch_coverage=1 00:29:39.328 --rc genhtml_function_coverage=1 00:29:39.328 --rc genhtml_legend=1 00:29:39.328 --rc geninfo_all_blocks=1 00:29:39.328 --rc geninfo_unexecuted_blocks=1 00:29:39.328 00:29:39.328 ' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:39.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.328 --rc genhtml_branch_coverage=1 00:29:39.328 --rc genhtml_function_coverage=1 00:29:39.328 --rc genhtml_legend=1 00:29:39.328 --rc geninfo_all_blocks=1 00:29:39.328 --rc geninfo_unexecuted_blocks=1 00:29:39.328 00:29:39.328 ' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:39.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.328 --rc genhtml_branch_coverage=1 00:29:39.328 --rc genhtml_function_coverage=1 00:29:39.328 --rc genhtml_legend=1 00:29:39.328 --rc geninfo_all_blocks=1 00:29:39.328 --rc geninfo_unexecuted_blocks=1 00:29:39.328 00:29:39.328 ' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:39.328 00:56:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.328 00:56:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.328 00:56:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.864 
00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.864 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.864 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.864 00:56:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.864 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.864 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
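nvmf_tcp_init, invoked at the end of the trace above, is what turns the two detected E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1, so the NVMe/TCP traffic crosses real hardware. Condensed into a standalone sketch (run as root; interface and namespace names taken from this log):

#!/usr/bin/env bash
# Sketch of the wiring nvmf_tcp_init performs in the trace that follows.
set -euo pipefail

target_if=cvl_0_0          # port that ends up inside the namespace (target side)
initiator_if=cvl_0_1       # port that stays in the default namespace (initiator side)
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"       # target port disappears from the default namespace

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# let NVMe/TCP (port 4420) in on the initiator-side interface, then check reachability
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

Because the target port now lives inside the namespace, every nvmf_tgt instance started later in the log is prefixed with ip netns exec cvl_0_0_ns_spdk so that it can listen on 10.0.0.2.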
00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.864 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:29:41.865 00:29:41.865 --- 10.0.0.2 ping statistics --- 00:29:41.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.865 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:29:41.865 00:29:41.865 --- 10.0.0.1 ping statistics --- 00:29:41.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.865 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=343598 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 343598 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 343598 ']' 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.865 [2024-12-07 00:56:57.725805] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:29:41.865 [2024-12-07 00:56:57.725889] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.865 [2024-12-07 00:56:57.801876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.865 [2024-12-07 00:56:57.849682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.865 [2024-12-07 00:56:57.849735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.865 [2024-12-07 00:56:57.849763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.865 [2024-12-07 00:56:57.849774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.865 [2024-12-07 00:56:57.849784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.865 [2024-12-07 00:56:57.851298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.865 [2024-12-07 00:56:57.851416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.865 [2024-12-07 00:56:57.851419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.865 00:56:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.865 [2024-12-07 00:56:57.996853] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.865 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.865 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:41.865 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.865 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.123 Malloc0 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 [2024-12-07 00:56:58.062600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 [2024-12-07 00:56:58.070467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 Malloc1 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=343695 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 343695 /var/tmp/bdevperf.sock 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 343695 ']' 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
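At this point the target side of the multicontroller test is fully assembled: two malloc bdevs (Malloc0, Malloc1), two subsystems (cnode1 and cnode2) each exposing one of them, listeners on 10.0.0.2 ports 4420 and 4421 for both, and a separate bdevperf process started with its own RPC socket at /var/tmp/bdevperf.sock. The rpc_cmd calls above are the test suite's wrapper around SPDK's scripts/rpc.py, so roughly the same setup can be reproduced by hand as sketched here (SPDK_DIR is an assumed checkout path, and the target itself must already be running inside the cvl_0_0_ns_spdk namespace as described earlier):

# Sketch: the target-side RPCs from the trace above, issued directly via scripts/rpc.py.
SPDK_DIR=/path/to/spdk          # assumption: adjust to your SPDK checkout
RPC="$SPDK_DIR/scripts/rpc.py"  # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

# bdevperf has its own RPC socket; this first attach succeeds and creates NVMe0n1.
# The attempts that follow in the log deliberately reuse the controller name NVMe0
# (different hostnqn, different subsystem, multipath disabled) to provoke the
# code -114 "already exists" JSON-RPC errors shown below.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1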
00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.124 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 NVMe0n1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.383 1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 request: 00:29:42.383 { 00:29:42.383 "name": "NVMe0", 00:29:42.383 "trtype": "tcp", 00:29:42.383 "traddr": "10.0.0.2", 00:29:42.383 "adrfam": "ipv4", 00:29:42.383 "trsvcid": "4420", 00:29:42.383 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:42.383 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:42.383 "hostaddr": "10.0.0.1", 00:29:42.383 "prchk_reftag": false, 00:29:42.383 "prchk_guard": false, 00:29:42.383 "hdgst": false, 00:29:42.383 "ddgst": false, 00:29:42.383 "allow_unrecognized_csi": false, 00:29:42.383 "method": "bdev_nvme_attach_controller", 00:29:42.383 "req_id": 1 00:29:42.383 } 00:29:42.383 Got JSON-RPC error response 00:29:42.383 response: 00:29:42.383 { 00:29:42.383 "code": -114, 00:29:42.383 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:42.383 } 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.383 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.383 request: 00:29:42.383 { 00:29:42.383 "name": "NVMe0", 00:29:42.383 "trtype": "tcp", 00:29:42.383 "traddr": "10.0.0.2", 00:29:42.383 "adrfam": "ipv4", 00:29:42.383 "trsvcid": "4420", 00:29:42.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:42.383 "hostaddr": "10.0.0.1", 00:29:42.383 "prchk_reftag": false, 00:29:42.383 "prchk_guard": false, 00:29:42.383 "hdgst": false, 00:29:42.383 "ddgst": false, 00:29:42.383 "allow_unrecognized_csi": false, 00:29:42.383 "method": "bdev_nvme_attach_controller", 00:29:42.383 "req_id": 1 00:29:42.383 } 00:29:42.383 Got JSON-RPC error response 00:29:42.383 response: 00:29:42.383 { 00:29:42.383 "code": -114, 00:29:42.383 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:42.383 } 00:29:42.383 00:56:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:42.384 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.643 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.643 request: 00:29:42.643 { 00:29:42.643 "name": "NVMe0", 00:29:42.643 "trtype": "tcp", 00:29:42.644 "traddr": "10.0.0.2", 00:29:42.644 "adrfam": "ipv4", 00:29:42.644 "trsvcid": "4420", 00:29:42.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.644 "hostaddr": "10.0.0.1", 00:29:42.644 "prchk_reftag": false, 00:29:42.644 "prchk_guard": false, 00:29:42.644 "hdgst": false, 00:29:42.644 "ddgst": false, 00:29:42.644 "multipath": "disable", 00:29:42.644 "allow_unrecognized_csi": false, 00:29:42.644 "method": "bdev_nvme_attach_controller", 00:29:42.644 "req_id": 1 00:29:42.644 } 00:29:42.644 Got JSON-RPC error response 00:29:42.644 response: 00:29:42.644 { 00:29:42.644 "code": -114, 00:29:42.644 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:42.644 } 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:42.644 00:56:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.644 request: 00:29:42.644 { 00:29:42.644 "name": "NVMe0", 00:29:42.644 "trtype": "tcp", 00:29:42.644 "traddr": "10.0.0.2", 00:29:42.644 "adrfam": "ipv4", 00:29:42.644 "trsvcid": "4420", 00:29:42.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.644 "hostaddr": "10.0.0.1", 00:29:42.644 "prchk_reftag": false, 00:29:42.644 "prchk_guard": false, 00:29:42.644 "hdgst": false, 00:29:42.644 "ddgst": false, 00:29:42.644 "multipath": "failover", 00:29:42.644 "allow_unrecognized_csi": false, 00:29:42.644 "method": "bdev_nvme_attach_controller", 00:29:42.644 "req_id": 1 00:29:42.644 } 00:29:42.644 Got JSON-RPC error response 00:29:42.644 response: 00:29:42.644 { 00:29:42.644 "code": -114, 00:29:42.644 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:42.644 } 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.644 NVMe0n1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
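Taken together, the four code -114 responses above show bdev_nvme_attach_controller refusing to reuse the name NVMe0 on the existing 10.0.0.2:4420 path with a different host NQN, for a different subsystem (cnode2), with multipath explicitly disabled, and with -x failover against the unchanged path, while the plain attach through the second listener on port 4421 is accepted and surfaces the same NVMe0n1 bdev. A minimal sketch of that accepted second-path attach over the bdevperf socket started earlier, again with SPDK_DIR as a placeholder:

    # NVMe0 already holds the 10.0.0.2:4420 path to cnode1; adding the 4421 path under the same name succeeds
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # repeating the original 4420 path with -x failover is what returns code -114 in the trace above
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || true

Once the paths are in place, the I/O pass itself is kicked off over the same socket with the helper seen further down: "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests.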
00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.644 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.904 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:42.904 00:56:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:44.281 { 00:29:44.282 "results": [ 00:29:44.282 { 00:29:44.282 "job": "NVMe0n1", 00:29:44.282 "core_mask": "0x1", 00:29:44.282 "workload": "write", 00:29:44.282 "status": "finished", 00:29:44.282 "queue_depth": 128, 00:29:44.282 "io_size": 4096, 00:29:44.282 "runtime": 1.00579, 00:29:44.282 "iops": 18184.710526054147, 00:29:44.282 "mibps": 71.03402549239901, 00:29:44.282 "io_failed": 0, 00:29:44.282 "io_timeout": 0, 00:29:44.282 "avg_latency_us": 7027.640150173136, 00:29:44.282 "min_latency_us": 6553.6, 00:29:44.282 "max_latency_us": 17573.357037037036 00:29:44.282 } 00:29:44.282 ], 00:29:44.282 "core_count": 1 00:29:44.282 } 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 
-- # '[' -z 343695 ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343695' 00:29:44.282 killing process with pid 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 343695 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:44.282 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:44.282 [2024-12-07 00:56:58.178216] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:29:44.282 [2024-12-07 00:56:58.178335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343695 ] 00:29:44.282 [2024-12-07 00:56:58.251446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.282 [2024-12-07 00:56:58.299130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.282 [2024-12-07 00:56:58.902421] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name e2ef631c-bb02-463d-9ec0-70405c6eaf7b already exists 00:29:44.282 [2024-12-07 00:56:58.902466] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:e2ef631c-bb02-463d-9ec0-70405c6eaf7b alias for bdev NVMe1n1 00:29:44.282 [2024-12-07 00:56:58.902497] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:44.282 Running I/O for 1 seconds... 00:29:44.282 18162.00 IOPS, 70.95 MiB/s 00:29:44.282 Latency(us) 00:29:44.282 [2024-12-06T23:57:00.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.282 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:44.282 NVMe0n1 : 1.01 18184.71 71.03 0.00 0.00 7027.64 6553.60 17573.36 00:29:44.282 [2024-12-06T23:57:00.433Z] =================================================================================================================== 00:29:44.282 [2024-12-06T23:57:00.433Z] Total : 18184.71 71.03 0.00 0.00 7027.64 6553.60 17573.36 00:29:44.282 Received shutdown signal, test time was about 1.000000 seconds 00:29:44.282 00:29:44.282 Latency(us) 00:29:44.282 [2024-12-06T23:57:00.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.282 [2024-12-06T23:57:00.433Z] =================================================================================================================== 00:29:44.282 [2024-12-06T23:57:00.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.282 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.282 rmmod nvme_tcp 00:29:44.282 rmmod nvme_fabrics 00:29:44.282 rmmod nvme_keyring 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:44.282 
00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 343598 ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 343598 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 343598 ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 343598 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343598 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343598' 00:29:44.282 killing process with pid 343598 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 343598 00:29:44.282 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 343598 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.541 00:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.082 00:29:47.082 real 0m7.549s 00:29:47.082 user 0m11.469s 00:29:47.082 sys 0m2.446s 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:47.082 ************************************ 00:29:47.082 END TEST nvmf_multicontroller 00:29:47.082 ************************************ 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.082 ************************************ 00:29:47.082 START TEST nvmf_aer 00:29:47.082 ************************************ 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:47.082 * Looking for test storage... 00:29:47.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.082 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.083 --rc genhtml_branch_coverage=1 00:29:47.083 --rc genhtml_function_coverage=1 00:29:47.083 --rc genhtml_legend=1 00:29:47.083 --rc geninfo_all_blocks=1 00:29:47.083 --rc geninfo_unexecuted_blocks=1 00:29:47.083 00:29:47.083 ' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.083 --rc genhtml_branch_coverage=1 00:29:47.083 --rc genhtml_function_coverage=1 00:29:47.083 --rc genhtml_legend=1 00:29:47.083 --rc geninfo_all_blocks=1 00:29:47.083 --rc geninfo_unexecuted_blocks=1 00:29:47.083 00:29:47.083 ' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.083 --rc genhtml_branch_coverage=1 00:29:47.083 --rc genhtml_function_coverage=1 00:29:47.083 --rc genhtml_legend=1 00:29:47.083 --rc geninfo_all_blocks=1 00:29:47.083 --rc geninfo_unexecuted_blocks=1 00:29:47.083 00:29:47.083 ' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.083 --rc genhtml_branch_coverage=1 00:29:47.083 --rc genhtml_function_coverage=1 00:29:47.083 --rc genhtml_legend=1 00:29:47.083 --rc geninfo_all_blocks=1 00:29:47.083 --rc geninfo_unexecuted_blocks=1 00:29:47.083 00:29:47.083 ' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.083 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:47.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.084 00:57:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:48.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:48.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:48.991 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.991 00:57:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:48.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.991 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:49.250 
00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:49.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:29:49.250 00:29:49.250 --- 10.0.0.2 ping statistics --- 00:29:49.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.250 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:29:49.250 00:29:49.250 --- 10.0.0.1 ping statistics --- 00:29:49.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.250 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=346071 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 346071 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 346071 ']' 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.250 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.250 [2024-12-07 00:57:05.258643] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
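Behind those two pings is the loopback topology that nvmf_tcp_init assembles above from the two ice ports found earlier: cvl_0_0 becomes the target side at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1, and port 4420 is opened in the firewall. A condensed sketch of those steps, with the interface names taken from the trace and assuming the ports are otherwise unconfigured:

    # isolate the target-side port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator and target ends and bring the links up
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the target port and verify connectivity in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1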
00:29:49.250 [2024-12-07 00:57:05.258735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.250 [2024-12-07 00:57:05.333359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.250 [2024-12-07 00:57:05.380802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.250 [2024-12-07 00:57:05.380859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.250 [2024-12-07 00:57:05.380887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.250 [2024-12-07 00:57:05.380898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.250 [2024-12-07 00:57:05.380907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.250 [2024-12-07 00:57:05.382592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.250 [2024-12-07 00:57:05.382674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.250 [2024-12-07 00:57:05.382615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.250 [2024-12-07 00:57:05.382677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.507 [2024-12-07 00:57:05.527235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.507 Malloc0 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.507 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.508 [2024-12-07 00:57:05.589360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.508 [ 00:29:49.508 { 00:29:49.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:49.508 "subtype": "Discovery", 00:29:49.508 "listen_addresses": [], 00:29:49.508 "allow_any_host": true, 00:29:49.508 "hosts": [] 00:29:49.508 }, 00:29:49.508 { 00:29:49.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.508 "subtype": "NVMe", 00:29:49.508 "listen_addresses": [ 00:29:49.508 { 00:29:49.508 "trtype": "TCP", 00:29:49.508 "adrfam": "IPv4", 00:29:49.508 "traddr": "10.0.0.2", 00:29:49.508 "trsvcid": "4420" 00:29:49.508 } 00:29:49.508 ], 00:29:49.508 "allow_any_host": true, 00:29:49.508 "hosts": [], 00:29:49.508 "serial_number": "SPDK00000000000001", 00:29:49.508 "model_number": "SPDK bdev Controller", 00:29:49.508 "max_namespaces": 2, 00:29:49.508 "min_cntlid": 1, 00:29:49.508 "max_cntlid": 65519, 00:29:49.508 "namespaces": [ 00:29:49.508 { 00:29:49.508 "nsid": 1, 00:29:49.508 "bdev_name": "Malloc0", 00:29:49.508 "name": "Malloc0", 00:29:49.508 "nguid": "53AA2237F87749EC83CA55973E852495", 00:29:49.508 "uuid": "53aa2237-f877-49ec-83ca-55973e852495" 00:29:49.508 } 00:29:49.508 ] 00:29:49.508 } 00:29:49.508 ] 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=346097 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:49.508 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:49.765 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:49.765 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:49.765 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:49.765 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:49.766 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:49.766 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:49.766 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:49.766 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:50.023 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:50.023 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:50.023 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:50.023 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:50.023 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 Malloc1 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 [ 00:29:50.024 { 00:29:50.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:50.024 "subtype": "Discovery", 00:29:50.024 "listen_addresses": [], 00:29:50.024 "allow_any_host": true, 00:29:50.024 "hosts": [] 00:29:50.024 }, 00:29:50.024 { 00:29:50.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.024 "subtype": "NVMe", 00:29:50.024 "listen_addresses": [ 00:29:50.024 { 00:29:50.024 "trtype": "TCP", 00:29:50.024 "adrfam": "IPv4", 00:29:50.024 "traddr": "10.0.0.2", 00:29:50.024 "trsvcid": "4420" 00:29:50.024 } 00:29:50.024 ], 00:29:50.024 "allow_any_host": true, 00:29:50.024 "hosts": [], 00:29:50.024 "serial_number": "SPDK00000000000001", 00:29:50.024 "model_number": "SPDK bdev Controller", 00:29:50.024 "max_namespaces": 2, 00:29:50.024 "min_cntlid": 1, 00:29:50.024 "max_cntlid": 65519, 00:29:50.024 "namespaces": [ 00:29:50.024 
{ 00:29:50.024 "nsid": 1, 00:29:50.024 "bdev_name": "Malloc0", 00:29:50.024 "name": "Malloc0", 00:29:50.024 "nguid": "53AA2237F87749EC83CA55973E852495", 00:29:50.024 "uuid": "53aa2237-f877-49ec-83ca-55973e852495" 00:29:50.024 }, 00:29:50.024 { 00:29:50.024 "nsid": 2, 00:29:50.024 "bdev_name": "Malloc1", 00:29:50.024 "name": "Malloc1", 00:29:50.024 "nguid": "88335227583B4C65810940A888400822", 00:29:50.024 "uuid": "88335227-583b-4c65-8109-40a888400822" 00:29:50.024 } 00:29:50.024 ] 00:29:50.024 } 00:29:50.024 ] 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 346097 00:29:50.024 Asynchronous Event Request test 00:29:50.024 Attaching to 10.0.0.2 00:29:50.024 Attached to 10.0.0.2 00:29:50.024 Registering asynchronous event callbacks... 00:29:50.024 Starting namespace attribute notice tests for all controllers... 00:29:50.024 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:50.024 aer_cb - Changed Namespace 00:29:50.024 Cleaning up... 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.024 rmmod nvme_tcp 00:29:50.024 rmmod nvme_fabrics 00:29:50.024 rmmod nvme_keyring 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 346071 ']' 00:29:50.024 
00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 346071 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 346071 ']' 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 346071 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346071 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346071' 00:29:50.024 killing process with pid 346071 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 346071 00:29:50.024 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 346071 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.282 00:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.816 00:29:52.816 real 0m5.636s 00:29:52.816 user 0m4.664s 00:29:52.816 sys 0m2.054s 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:52.816 ************************************ 00:29:52.816 END TEST nvmf_aer 00:29:52.816 ************************************ 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.816 ************************************ 00:29:52.816 START TEST nvmf_async_init 00:29:52.816 
************************************ 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:52.816 * Looking for test storage... 00:29:52.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:52.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.816 --rc genhtml_branch_coverage=1 00:29:52.816 --rc genhtml_function_coverage=1 00:29:52.816 --rc genhtml_legend=1 00:29:52.816 --rc geninfo_all_blocks=1 00:29:52.816 --rc geninfo_unexecuted_blocks=1 00:29:52.816 00:29:52.816 ' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:52.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.816 --rc genhtml_branch_coverage=1 00:29:52.816 --rc genhtml_function_coverage=1 00:29:52.816 --rc genhtml_legend=1 00:29:52.816 --rc geninfo_all_blocks=1 00:29:52.816 --rc geninfo_unexecuted_blocks=1 00:29:52.816 00:29:52.816 ' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:52.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.816 --rc genhtml_branch_coverage=1 00:29:52.816 --rc genhtml_function_coverage=1 00:29:52.816 --rc genhtml_legend=1 00:29:52.816 --rc geninfo_all_blocks=1 00:29:52.816 --rc geninfo_unexecuted_blocks=1 00:29:52.816 00:29:52.816 ' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:52.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.816 --rc genhtml_branch_coverage=1 00:29:52.816 --rc genhtml_function_coverage=1 00:29:52.816 --rc genhtml_legend=1 00:29:52.816 --rc geninfo_all_blocks=1 00:29:52.816 --rc geninfo_unexecuted_blocks=1 00:29:52.816 00:29:52.816 ' 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.816 00:57:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.816 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:52.817 00:57:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d389c18bb2b24f989a3c218d936b8bc4 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.817 00:57:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.723 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.724 00:57:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:29:54.724 00:29:54.724 --- 10.0.0.2 ping statistics --- 00:29:54.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.724 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:29:54.724 00:29:54.724 --- 10.0.0.1 ping statistics --- 00:29:54.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.724 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=348672 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:54.724 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 348672 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 348672 ']' 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.725 00:57:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.725 [2024-12-07 00:57:10.846356] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:29:54.725 [2024-12-07 00:57:10.846463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.985 [2024-12-07 00:57:10.921791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.985 [2024-12-07 00:57:10.966324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.985 [2024-12-07 00:57:10.966400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.985 [2024-12-07 00:57:10.966424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.985 [2024-12-07 00:57:10.966458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.985 [2024-12-07 00:57:10.966468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.985 [2024-12-07 00:57:10.967067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.985 [2024-12-07 00:57:11.108993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.985 null0 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:54.985 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d389c18bb2b24f989a3c218d936b8bc4 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.246 [2024-12-07 00:57:11.149262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.246 nvme0n1 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.246 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.246 [ 00:29:55.246 { 00:29:55.246 "name": "nvme0n1", 00:29:55.246 "aliases": [ 00:29:55.246 "d389c18b-b2b2-4f98-9a3c-218d936b8bc4" 00:29:55.246 ], 00:29:55.246 "product_name": "NVMe disk", 00:29:55.246 "block_size": 512, 00:29:55.246 "num_blocks": 2097152, 00:29:55.246 "uuid": "d389c18b-b2b2-4f98-9a3c-218d936b8bc4", 00:29:55.246 "numa_id": 0, 00:29:55.246 "assigned_rate_limits": { 00:29:55.246 "rw_ios_per_sec": 0, 00:29:55.246 "rw_mbytes_per_sec": 0, 00:29:55.246 "r_mbytes_per_sec": 0, 00:29:55.246 "w_mbytes_per_sec": 0 00:29:55.246 }, 00:29:55.246 "claimed": false, 00:29:55.246 "zoned": false, 00:29:55.246 "supported_io_types": { 00:29:55.246 "read": true, 00:29:55.246 "write": true, 00:29:55.246 "unmap": false, 00:29:55.246 "flush": true, 00:29:55.246 "reset": true, 00:29:55.246 "nvme_admin": true, 00:29:55.246 "nvme_io": true, 00:29:55.246 "nvme_io_md": false, 00:29:55.246 "write_zeroes": true, 00:29:55.246 "zcopy": false, 00:29:55.246 "get_zone_info": false, 00:29:55.246 "zone_management": false, 00:29:55.246 "zone_append": false, 00:29:55.246 "compare": true, 00:29:55.246 "compare_and_write": true, 00:29:55.246 "abort": true, 00:29:55.246 "seek_hole": false, 00:29:55.246 "seek_data": false, 00:29:55.246 "copy": true, 00:29:55.246 "nvme_iov_md": false 00:29:55.246 }, 00:29:55.246 
"memory_domains": [ 00:29:55.246 { 00:29:55.246 "dma_device_id": "system", 00:29:55.246 "dma_device_type": 1 00:29:55.246 } 00:29:55.246 ], 00:29:55.246 "driver_specific": { 00:29:55.246 "nvme": [ 00:29:55.246 { 00:29:55.246 "trid": { 00:29:55.246 "trtype": "TCP", 00:29:55.246 "adrfam": "IPv4", 00:29:55.246 "traddr": "10.0.0.2", 00:29:55.246 "trsvcid": "4420", 00:29:55.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.246 }, 00:29:55.246 "ctrlr_data": { 00:29:55.246 "cntlid": 1, 00:29:55.246 "vendor_id": "0x8086", 00:29:55.246 "model_number": "SPDK bdev Controller", 00:29:55.246 "serial_number": "00000000000000000000", 00:29:55.246 "firmware_revision": "25.01", 00:29:55.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.246 "oacs": { 00:29:55.246 "security": 0, 00:29:55.506 "format": 0, 00:29:55.506 "firmware": 0, 00:29:55.506 "ns_manage": 0 00:29:55.506 }, 00:29:55.506 "multi_ctrlr": true, 00:29:55.506 "ana_reporting": false 00:29:55.506 }, 00:29:55.506 "vs": { 00:29:55.506 "nvme_version": "1.3" 00:29:55.506 }, 00:29:55.506 "ns_data": { 00:29:55.506 "id": 1, 00:29:55.506 "can_share": true 00:29:55.506 } 00:29:55.506 } 00:29:55.506 ], 00:29:55.506 "mp_policy": "active_passive" 00:29:55.506 } 00:29:55.506 } 00:29:55.506 ] 00:29:55.506 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.506 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:55.506 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.506 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.506 [2024-12-07 00:57:11.397741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:55.506 [2024-12-07 00:57:11.397821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ba250 (9): Bad file descriptor 00:29:55.507 [2024-12-07 00:57:11.530124] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 [ 00:29:55.507 { 00:29:55.507 "name": "nvme0n1", 00:29:55.507 "aliases": [ 00:29:55.507 "d389c18b-b2b2-4f98-9a3c-218d936b8bc4" 00:29:55.507 ], 00:29:55.507 "product_name": "NVMe disk", 00:29:55.507 "block_size": 512, 00:29:55.507 "num_blocks": 2097152, 00:29:55.507 "uuid": "d389c18b-b2b2-4f98-9a3c-218d936b8bc4", 00:29:55.507 "numa_id": 0, 00:29:55.507 "assigned_rate_limits": { 00:29:55.507 "rw_ios_per_sec": 0, 00:29:55.507 "rw_mbytes_per_sec": 0, 00:29:55.507 "r_mbytes_per_sec": 0, 00:29:55.507 "w_mbytes_per_sec": 0 00:29:55.507 }, 00:29:55.507 "claimed": false, 00:29:55.507 "zoned": false, 00:29:55.507 "supported_io_types": { 00:29:55.507 "read": true, 00:29:55.507 "write": true, 00:29:55.507 "unmap": false, 00:29:55.507 "flush": true, 00:29:55.507 "reset": true, 00:29:55.507 "nvme_admin": true, 00:29:55.507 "nvme_io": true, 00:29:55.507 "nvme_io_md": false, 00:29:55.507 "write_zeroes": true, 00:29:55.507 "zcopy": false, 00:29:55.507 "get_zone_info": false, 00:29:55.507 "zone_management": false, 00:29:55.507 "zone_append": false, 00:29:55.507 "compare": true, 00:29:55.507 "compare_and_write": true, 00:29:55.507 "abort": true, 00:29:55.507 "seek_hole": false, 00:29:55.507 "seek_data": false, 00:29:55.507 "copy": true, 00:29:55.507 "nvme_iov_md": false 00:29:55.507 }, 00:29:55.507 "memory_domains": [ 00:29:55.507 { 00:29:55.507 "dma_device_id": "system", 00:29:55.507 "dma_device_type": 1 00:29:55.507 } 00:29:55.507 ], 00:29:55.507 "driver_specific": { 00:29:55.507 "nvme": [ 00:29:55.507 { 00:29:55.507 "trid": { 00:29:55.507 "trtype": "TCP", 00:29:55.507 "adrfam": "IPv4", 00:29:55.507 "traddr": "10.0.0.2", 00:29:55.507 "trsvcid": "4420", 00:29:55.507 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.507 }, 00:29:55.507 "ctrlr_data": { 00:29:55.507 "cntlid": 2, 00:29:55.507 "vendor_id": "0x8086", 00:29:55.507 "model_number": "SPDK bdev Controller", 00:29:55.507 "serial_number": "00000000000000000000", 00:29:55.507 "firmware_revision": "25.01", 00:29:55.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.507 "oacs": { 00:29:55.507 "security": 0, 00:29:55.507 "format": 0, 00:29:55.507 "firmware": 0, 00:29:55.507 "ns_manage": 0 00:29:55.507 }, 00:29:55.507 "multi_ctrlr": true, 00:29:55.507 "ana_reporting": false 00:29:55.507 }, 00:29:55.507 "vs": { 00:29:55.507 "nvme_version": "1.3" 00:29:55.507 }, 00:29:55.507 "ns_data": { 00:29:55.507 "id": 1, 00:29:55.507 "can_share": true 00:29:55.507 } 00:29:55.507 } 00:29:55.507 ], 00:29:55.507 "mp_policy": "active_passive" 00:29:55.507 } 00:29:55.507 } 00:29:55.507 ] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.D6zaYT2mm6 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.D6zaYT2mm6 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.D6zaYT2mm6 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 [2024-12-07 00:57:11.586339] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:55.507 [2024-12-07 00:57:11.586457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.507 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.507 [2024-12-07 00:57:11.602389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:55.767 nvme0n1 00:29:55.767 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.767 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:55.767 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.767 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.767 [ 00:29:55.767 { 00:29:55.767 "name": "nvme0n1", 00:29:55.767 "aliases": [ 00:29:55.767 "d389c18b-b2b2-4f98-9a3c-218d936b8bc4" 00:29:55.767 ], 00:29:55.767 "product_name": "NVMe disk", 00:29:55.767 "block_size": 512, 00:29:55.767 "num_blocks": 2097152, 00:29:55.767 "uuid": "d389c18b-b2b2-4f98-9a3c-218d936b8bc4", 00:29:55.767 "numa_id": 0, 00:29:55.767 "assigned_rate_limits": { 00:29:55.767 "rw_ios_per_sec": 0, 00:29:55.767 "rw_mbytes_per_sec": 0, 00:29:55.767 "r_mbytes_per_sec": 0, 00:29:55.767 "w_mbytes_per_sec": 0 00:29:55.767 }, 00:29:55.767 "claimed": false, 00:29:55.767 "zoned": false, 00:29:55.767 "supported_io_types": { 00:29:55.767 "read": true, 00:29:55.767 "write": true, 00:29:55.767 "unmap": false, 00:29:55.767 "flush": true, 00:29:55.767 "reset": true, 00:29:55.767 "nvme_admin": true, 00:29:55.767 "nvme_io": true, 00:29:55.767 "nvme_io_md": false, 00:29:55.767 "write_zeroes": true, 00:29:55.767 "zcopy": false, 00:29:55.767 "get_zone_info": false, 00:29:55.767 "zone_management": false, 00:29:55.767 "zone_append": false, 00:29:55.767 "compare": true, 00:29:55.767 "compare_and_write": true, 00:29:55.767 "abort": true, 00:29:55.767 "seek_hole": false, 00:29:55.767 "seek_data": false, 00:29:55.767 "copy": true, 00:29:55.767 "nvme_iov_md": false 00:29:55.767 }, 00:29:55.767 "memory_domains": [ 00:29:55.767 { 00:29:55.767 "dma_device_id": "system", 00:29:55.767 "dma_device_type": 1 00:29:55.767 } 00:29:55.767 ], 00:29:55.767 "driver_specific": { 00:29:55.767 "nvme": [ 00:29:55.767 { 00:29:55.767 "trid": { 00:29:55.767 "trtype": "TCP", 00:29:55.767 "adrfam": "IPv4", 00:29:55.767 "traddr": "10.0.0.2", 00:29:55.767 "trsvcid": "4421", 00:29:55.767 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.767 }, 00:29:55.767 "ctrlr_data": { 00:29:55.767 "cntlid": 3, 00:29:55.767 "vendor_id": "0x8086", 00:29:55.767 "model_number": "SPDK bdev Controller", 00:29:55.767 "serial_number": "00000000000000000000", 00:29:55.767 "firmware_revision": "25.01", 00:29:55.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.767 "oacs": { 00:29:55.767 "security": 0, 00:29:55.767 "format": 0, 00:29:55.767 "firmware": 0, 00:29:55.767 "ns_manage": 0 00:29:55.767 }, 00:29:55.767 "multi_ctrlr": true, 00:29:55.767 "ana_reporting": false 00:29:55.767 }, 00:29:55.767 "vs": { 00:29:55.767 "nvme_version": "1.3" 00:29:55.767 }, 00:29:55.767 "ns_data": { 00:29:55.767 "id": 1, 00:29:55.767 "can_share": true 00:29:55.767 } 00:29:55.767 } 00:29:55.767 ], 00:29:55.767 "mp_policy": "active_passive" 00:29:55.767 } 00:29:55.767 } 00:29:55.767 ] 00:29:55.767 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.D6zaYT2mm6 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.768 rmmod nvme_tcp 00:29:55.768 rmmod nvme_fabrics 00:29:55.768 rmmod nvme_keyring 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 348672 ']' 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 348672 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 348672 ']' 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 348672 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348672 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348672' 00:29:55.768 killing process with pid 348672 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 348672 00:29:55.768 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 348672 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.027 
00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.027 00:57:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.937 00:29:57.937 real 0m5.592s 00:29:57.937 user 0m2.128s 00:29:57.937 sys 0m1.871s 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.937 ************************************ 00:29:57.937 END TEST nvmf_async_init 00:29:57.937 ************************************ 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.937 ************************************ 00:29:57.937 START TEST dma 00:29:57.937 ************************************ 00:29:57.937 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:58.196 * Looking for test storage... 00:29:58.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.196 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.197 --rc genhtml_branch_coverage=1 00:29:58.197 --rc genhtml_function_coverage=1 00:29:58.197 --rc genhtml_legend=1 00:29:58.197 --rc geninfo_all_blocks=1 00:29:58.197 --rc geninfo_unexecuted_blocks=1 00:29:58.197 00:29:58.197 ' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:58.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.197 --rc genhtml_branch_coverage=1 00:29:58.197 --rc genhtml_function_coverage=1 00:29:58.197 --rc genhtml_legend=1 00:29:58.197 --rc geninfo_all_blocks=1 00:29:58.197 --rc geninfo_unexecuted_blocks=1 00:29:58.197 00:29:58.197 ' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.197 --rc genhtml_branch_coverage=1 00:29:58.197 --rc genhtml_function_coverage=1 00:29:58.197 --rc genhtml_legend=1 00:29:58.197 --rc geninfo_all_blocks=1 00:29:58.197 --rc geninfo_unexecuted_blocks=1 00:29:58.197 00:29:58.197 ' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.197 --rc genhtml_branch_coverage=1 00:29:58.197 --rc genhtml_function_coverage=1 00:29:58.197 --rc genhtml_legend=1 00:29:58.197 --rc geninfo_all_blocks=1 00:29:58.197 --rc geninfo_unexecuted_blocks=1 00:29:58.197 00:29:58.197 ' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.197 
00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:58.197 00:57:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:58.197 00:29:58.197 real 0m0.154s 00:29:58.197 user 0m0.101s 00:29:58.198 sys 0m0.060s 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:58.198 ************************************ 00:29:58.198 END TEST dma 00:29:58.198 ************************************ 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.198 ************************************ 00:29:58.198 START TEST nvmf_identify 00:29:58.198 
************************************ 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:58.198 * Looking for test storage... 00:29:58.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.198 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.457 --rc genhtml_branch_coverage=1 00:29:58.457 --rc genhtml_function_coverage=1 00:29:58.457 --rc genhtml_legend=1 00:29:58.457 --rc geninfo_all_blocks=1 00:29:58.457 --rc geninfo_unexecuted_blocks=1 00:29:58.457 00:29:58.457 ' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.457 --rc genhtml_branch_coverage=1 00:29:58.457 --rc genhtml_function_coverage=1 00:29:58.457 --rc genhtml_legend=1 00:29:58.457 --rc geninfo_all_blocks=1 00:29:58.457 --rc geninfo_unexecuted_blocks=1 00:29:58.457 00:29:58.457 ' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.457 --rc genhtml_branch_coverage=1 00:29:58.457 --rc genhtml_function_coverage=1 00:29:58.457 --rc genhtml_legend=1 00:29:58.457 --rc geninfo_all_blocks=1 00:29:58.457 --rc geninfo_unexecuted_blocks=1 00:29:58.457 00:29:58.457 ' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.457 --rc genhtml_branch_coverage=1 00:29:58.457 --rc genhtml_function_coverage=1 00:29:58.457 --rc genhtml_legend=1 00:29:58.457 --rc geninfo_all_blocks=1 00:29:58.457 --rc geninfo_unexecuted_blocks=1 00:29:58.457 00:29:58.457 ' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.457 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.458 00:57:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.994 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:00.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:00.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:00.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:00.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:30:00.995 00:30:00.995 --- 10.0.0.2 ping statistics --- 00:30:00.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.995 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:00.995 00:30:00.995 --- 10.0.0.1 ping statistics --- 00:30:00.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.995 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=350814 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 350814 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 350814 ']' 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.995 00:57:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.995 [2024-12-07 00:57:16.793199] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
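Before the identify test starts the target, nvmftestinit above carves the two ice ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1, and a single iptables rule opens the NVMe/TCP port. A condensed sketch of that wiring is below; the interface names, addresses and port are taken from this run and will differ on other hardware:

    # Target-side port goes into its own namespace; initiator-side port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the 10.0.0.0/24 point-to-point pair and bring links up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in on the initiator-side interface, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two successful single-packet pings in the log are exactly this verification step.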
00:30:00.995 [2024-12-07 00:57:16.793291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.995 [2024-12-07 00:57:16.865916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.995 [2024-12-07 00:57:16.911105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.995 [2024-12-07 00:57:16.911159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.995 [2024-12-07 00:57:16.911183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.995 [2024-12-07 00:57:16.911195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.995 [2024-12-07 00:57:16.911204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.995 [2024-12-07 00:57:16.912594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.995 [2024-12-07 00:57:16.912703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.995 [2024-12-07 00:57:16.912803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.995 [2024-12-07 00:57:16.912811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.995 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.995 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:00.995 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 [2024-12-07 00:57:17.034196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 Malloc0 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:00.996 [2024-12-07 00:57:17.132142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.996 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.257 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.257 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:01.257 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.257 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.257 [ 00:30:01.257 { 00:30:01.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:01.257 "subtype": "Discovery", 00:30:01.257 "listen_addresses": [ 00:30:01.257 { 00:30:01.257 "trtype": "TCP", 00:30:01.257 "adrfam": "IPv4", 00:30:01.257 "traddr": "10.0.0.2", 00:30:01.257 "trsvcid": "4420" 00:30:01.257 } 00:30:01.257 ], 00:30:01.257 "allow_any_host": true, 00:30:01.257 "hosts": [] 00:30:01.257 }, 00:30:01.257 { 00:30:01.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.257 "subtype": "NVMe", 00:30:01.257 "listen_addresses": [ 00:30:01.257 { 00:30:01.257 "trtype": "TCP", 00:30:01.257 "adrfam": "IPv4", 00:30:01.257 "traddr": "10.0.0.2", 00:30:01.257 "trsvcid": "4420" 00:30:01.257 } 00:30:01.257 ], 00:30:01.257 "allow_any_host": true, 00:30:01.257 "hosts": [], 00:30:01.257 "serial_number": "SPDK00000000000001", 00:30:01.257 "model_number": "SPDK bdev Controller", 00:30:01.257 "max_namespaces": 32, 00:30:01.257 "min_cntlid": 1, 00:30:01.258 "max_cntlid": 65519, 00:30:01.258 "namespaces": [ 00:30:01.258 { 00:30:01.258 "nsid": 1, 00:30:01.258 "bdev_name": "Malloc0", 00:30:01.258 "name": "Malloc0", 00:30:01.258 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:01.258 "eui64": "ABCDEF0123456789", 00:30:01.258 "uuid": "bf16a9d6-8896-41f1-b15c-db0e63c591fa" 00:30:01.258 } 00:30:01.258 ] 00:30:01.258 } 00:30:01.258 ] 00:30:01.258 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.258 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:01.258 [2024-12-07 00:57:17.174365] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:01.258 [2024-12-07 00:57:17.174408] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350836 ] 00:30:01.258 [2024-12-07 00:57:17.226202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:01.258 [2024-12-07 00:57:17.226302] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:01.258 [2024-12-07 00:57:17.226314] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:01.258 [2024-12-07 00:57:17.226334] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:01.258 [2024-12-07 00:57:17.226350] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:01.258 [2024-12-07 00:57:17.230442] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:01.258 [2024-12-07 00:57:17.230517] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcd3d80 0 00:30:01.258 [2024-12-07 00:57:17.230647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:01.258 [2024-12-07 00:57:17.230669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:01.258 [2024-12-07 00:57:17.230683] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:01.258 [2024-12-07 00:57:17.230690] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:01.258 [2024-12-07 00:57:17.230743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.230757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.230765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.230788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:01.258 [2024-12-07 00:57:17.230831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.238010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.238028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.238036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.238069] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:01.258 [2024-12-07 00:57:17.238082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:01.258 [2024-12-07 00:57:17.238092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:01.258 [2024-12-07 00:57:17.238119] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.238145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.238168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.238310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.238323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.238330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.238353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:01.258 [2024-12-07 00:57:17.238368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:01.258 [2024-12-07 00:57:17.238380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.238409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.238431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.238563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.238577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.238584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.238602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:01.258 [2024-12-07 00:57:17.238617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.238629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.238653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.238674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 
00:30:01.258 [2024-12-07 00:57:17.238758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.238770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.238777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.238793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.238809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.238835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.238855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.238945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.238959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.238966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.238973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.238983] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:01.258 [2024-12-07 00:57:17.238991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.239014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.239126] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:01.258 [2024-12-07 00:57:17.239135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.239159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.239185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.239221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.239351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.239366] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.239373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.258 [2024-12-07 00:57:17.239390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:01.258 [2024-12-07 00:57:17.239407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.258 [2024-12-07 00:57:17.239433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.258 [2024-12-07 00:57:17.239454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.258 [2024-12-07 00:57:17.239547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.258 [2024-12-07 00:57:17.239559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.258 [2024-12-07 00:57:17.239566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.258 [2024-12-07 00:57:17.239573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.259 [2024-12-07 00:57:17.239580] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:01.259 [2024-12-07 00:57:17.239590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.239603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:01.259 [2024-12-07 00:57:17.239622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.239641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.239660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.259 [2024-12-07 00:57:17.239680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.259 [2024-12-07 00:57:17.239830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.259 [2024-12-07 00:57:17.239845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.259 [2024-12-07 00:57:17.239852] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239859] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd3d80): datao=0, datal=4096, cccid=0 00:30:01.259 [2024-12-07 00:57:17.239881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd3f480) on tqpair(0xcd3d80): expected_datao=0, payload_size=4096 00:30:01.259 [2024-12-07 00:57:17.239895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239907] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.259 [2024-12-07 00:57:17.239951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.259 [2024-12-07 00:57:17.239958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.239964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.259 [2024-12-07 00:57:17.239984] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:01.259 [2024-12-07 00:57:17.240000] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:01.259 [2024-12-07 00:57:17.240009] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:01.259 [2024-12-07 00:57:17.240020] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:01.259 [2024-12-07 00:57:17.240030] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:01.259 [2024-12-07 00:57:17.240038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.240053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.240066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:01.259 [2024-12-07 00:57:17.240113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.259 [2024-12-07 00:57:17.240241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.259 [2024-12-07 00:57:17.240255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.259 [2024-12-07 00:57:17.240262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.259 [2024-12-07 00:57:17.240282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 
00:57:17.240305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.259 [2024-12-07 00:57:17.240316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.259 [2024-12-07 00:57:17.240347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.259 [2024-12-07 00:57:17.240383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.259 [2024-12-07 00:57:17.240429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.240450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:01.259 [2024-12-07 00:57:17.240463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.259 [2024-12-07 00:57:17.240515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f480, cid 0, qid 0 00:30:01.259 [2024-12-07 00:57:17.240526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f600, cid 1, qid 0 00:30:01.259 [2024-12-07 00:57:17.240533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f780, cid 2, qid 0 00:30:01.259 [2024-12-07 00:57:17.240541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.259 [2024-12-07 00:57:17.240548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fa80, cid 4, qid 0 00:30:01.259 [2024-12-07 00:57:17.240688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.259 [2024-12-07 00:57:17.240702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.259 [2024-12-07 00:57:17.240709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.259 
[2024-12-07 00:57:17.240716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fa80) on tqpair=0xcd3d80 00:30:01.259 [2024-12-07 00:57:17.240727] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:01.259 [2024-12-07 00:57:17.240736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:01.259 [2024-12-07 00:57:17.240753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.240773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.259 [2024-12-07 00:57:17.240794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fa80, cid 4, qid 0 00:30:01.259 [2024-12-07 00:57:17.240928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.259 [2024-12-07 00:57:17.240944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.259 [2024-12-07 00:57:17.240952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240958] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd3d80): datao=0, datal=4096, cccid=4 00:30:01.259 [2024-12-07 00:57:17.240966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd3fa80) on tqpair(0xcd3d80): expected_datao=0, payload_size=4096 00:30:01.259 [2024-12-07 00:57:17.240974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240984] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.240991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.259 [2024-12-07 00:57:17.241025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.259 [2024-12-07 00:57:17.241033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fa80) on tqpair=0xcd3d80 00:30:01.259 [2024-12-07 00:57:17.241060] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:01.259 [2024-12-07 00:57:17.241101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.241122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.259 [2024-12-07 00:57:17.241134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcd3d80) 00:30:01.259 [2024-12-07 00:57:17.241157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.259 [2024-12-07 00:57:17.241185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fa80, cid 4, qid 0 00:30:01.259 [2024-12-07 00:57:17.241197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fc00, cid 5, qid 0 00:30:01.259 [2024-12-07 00:57:17.241386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.259 [2024-12-07 00:57:17.241401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.259 [2024-12-07 00:57:17.241408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.259 [2024-12-07 00:57:17.241415] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd3d80): datao=0, datal=1024, cccid=4 00:30:01.260 [2024-12-07 00:57:17.241422] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd3fa80) on tqpair(0xcd3d80): expected_datao=0, payload_size=1024 00:30:01.260 [2024-12-07 00:57:17.241430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.241439] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.241446] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.241455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.260 [2024-12-07 00:57:17.241464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.260 [2024-12-07 00:57:17.241470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.241491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fc00) on tqpair=0xcd3d80 00:30:01.260 [2024-12-07 00:57:17.286009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.260 [2024-12-07 00:57:17.286027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.260 [2024-12-07 00:57:17.286035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.286042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fa80) on tqpair=0xcd3d80 00:30:01.260 [2024-12-07 00:57:17.286061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.286070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd3d80) 00:30:01.260 [2024-12-07 00:57:17.286080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.260 [2024-12-07 00:57:17.286110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fa80, cid 4, qid 0 00:30:01.260 [2024-12-07 00:57:17.286228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.260 [2024-12-07 00:57:17.286244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.260 [2024-12-07 00:57:17.286251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.286262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd3d80): datao=0, datal=3072, cccid=4 00:30:01.260 [2024-12-07 00:57:17.286271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd3fa80) on tqpair(0xcd3d80): expected_datao=0, payload_size=3072 00:30:01.260 [2024-12-07 00:57:17.286278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:01.260 [2024-12-07 00:57:17.286300] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.286309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.260 [2024-12-07 00:57:17.328112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.260 [2024-12-07 00:57:17.328119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fa80) on tqpair=0xcd3d80 00:30:01.260 [2024-12-07 00:57:17.328141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcd3d80) 00:30:01.260 [2024-12-07 00:57:17.328161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.260 [2024-12-07 00:57:17.328191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3fa80, cid 4, qid 0 00:30:01.260 [2024-12-07 00:57:17.328281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.260 [2024-12-07 00:57:17.328293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.260 [2024-12-07 00:57:17.328300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328307] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcd3d80): datao=0, datal=8, cccid=4 00:30:01.260 [2024-12-07 00:57:17.328314] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd3fa80) on tqpair(0xcd3d80): expected_datao=0, payload_size=8 00:30:01.260 [2024-12-07 00:57:17.328322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.328338] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.369076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.260 [2024-12-07 00:57:17.369095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.260 [2024-12-07 00:57:17.369102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.260 [2024-12-07 00:57:17.369109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3fa80) on tqpair=0xcd3d80 00:30:01.260 ===================================================== 00:30:01.260 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:01.260 ===================================================== 00:30:01.260 Controller Capabilities/Features 00:30:01.260 ================================ 00:30:01.260 Vendor ID: 0000 00:30:01.260 Subsystem Vendor ID: 0000 00:30:01.260 Serial Number: .................... 00:30:01.260 Model Number: ........................................ 
00:30:01.260 Firmware Version: 25.01 00:30:01.260 Recommended Arb Burst: 0 00:30:01.260 IEEE OUI Identifier: 00 00 00 00:30:01.260 Multi-path I/O 00:30:01.260 May have multiple subsystem ports: No 00:30:01.260 May have multiple controllers: No 00:30:01.260 Associated with SR-IOV VF: No 00:30:01.260 Max Data Transfer Size: 131072 00:30:01.260 Max Number of Namespaces: 0 00:30:01.260 Max Number of I/O Queues: 1024 00:30:01.260 NVMe Specification Version (VS): 1.3 00:30:01.260 NVMe Specification Version (Identify): 1.3 00:30:01.260 Maximum Queue Entries: 128 00:30:01.260 Contiguous Queues Required: Yes 00:30:01.260 Arbitration Mechanisms Supported 00:30:01.260 Weighted Round Robin: Not Supported 00:30:01.260 Vendor Specific: Not Supported 00:30:01.260 Reset Timeout: 15000 ms 00:30:01.260 Doorbell Stride: 4 bytes 00:30:01.260 NVM Subsystem Reset: Not Supported 00:30:01.260 Command Sets Supported 00:30:01.260 NVM Command Set: Supported 00:30:01.260 Boot Partition: Not Supported 00:30:01.260 Memory Page Size Minimum: 4096 bytes 00:30:01.260 Memory Page Size Maximum: 4096 bytes 00:30:01.260 Persistent Memory Region: Not Supported 00:30:01.260 Optional Asynchronous Events Supported 00:30:01.260 Namespace Attribute Notices: Not Supported 00:30:01.260 Firmware Activation Notices: Not Supported 00:30:01.260 ANA Change Notices: Not Supported 00:30:01.260 PLE Aggregate Log Change Notices: Not Supported 00:30:01.260 LBA Status Info Alert Notices: Not Supported 00:30:01.260 EGE Aggregate Log Change Notices: Not Supported 00:30:01.260 Normal NVM Subsystem Shutdown event: Not Supported 00:30:01.260 Zone Descriptor Change Notices: Not Supported 00:30:01.260 Discovery Log Change Notices: Supported 00:30:01.260 Controller Attributes 00:30:01.260 128-bit Host Identifier: Not Supported 00:30:01.260 Non-Operational Permissive Mode: Not Supported 00:30:01.260 NVM Sets: Not Supported 00:30:01.260 Read Recovery Levels: Not Supported 00:30:01.260 Endurance Groups: Not Supported 00:30:01.260 Predictable Latency Mode: Not Supported 00:30:01.260 Traffic Based Keep ALive: Not Supported 00:30:01.260 Namespace Granularity: Not Supported 00:30:01.260 SQ Associations: Not Supported 00:30:01.260 UUID List: Not Supported 00:30:01.260 Multi-Domain Subsystem: Not Supported 00:30:01.260 Fixed Capacity Management: Not Supported 00:30:01.260 Variable Capacity Management: Not Supported 00:30:01.260 Delete Endurance Group: Not Supported 00:30:01.260 Delete NVM Set: Not Supported 00:30:01.260 Extended LBA Formats Supported: Not Supported 00:30:01.260 Flexible Data Placement Supported: Not Supported 00:30:01.260 00:30:01.260 Controller Memory Buffer Support 00:30:01.260 ================================ 00:30:01.260 Supported: No 00:30:01.260 00:30:01.260 Persistent Memory Region Support 00:30:01.260 ================================ 00:30:01.260 Supported: No 00:30:01.260 00:30:01.260 Admin Command Set Attributes 00:30:01.260 ============================ 00:30:01.260 Security Send/Receive: Not Supported 00:30:01.260 Format NVM: Not Supported 00:30:01.260 Firmware Activate/Download: Not Supported 00:30:01.260 Namespace Management: Not Supported 00:30:01.260 Device Self-Test: Not Supported 00:30:01.260 Directives: Not Supported 00:30:01.260 NVMe-MI: Not Supported 00:30:01.260 Virtualization Management: Not Supported 00:30:01.260 Doorbell Buffer Config: Not Supported 00:30:01.260 Get LBA Status Capability: Not Supported 00:30:01.260 Command & Feature Lockdown Capability: Not Supported 00:30:01.260 Abort Command Limit: 1 00:30:01.260 Async 
Event Request Limit: 4 00:30:01.260 Number of Firmware Slots: N/A 00:30:01.260 Firmware Slot 1 Read-Only: N/A 00:30:01.260 Firmware Activation Without Reset: N/A 00:30:01.260 Multiple Update Detection Support: N/A 00:30:01.260 Firmware Update Granularity: No Information Provided 00:30:01.260 Per-Namespace SMART Log: No 00:30:01.260 Asymmetric Namespace Access Log Page: Not Supported 00:30:01.260 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:01.260 Command Effects Log Page: Not Supported 00:30:01.260 Get Log Page Extended Data: Supported 00:30:01.260 Telemetry Log Pages: Not Supported 00:30:01.260 Persistent Event Log Pages: Not Supported 00:30:01.260 Supported Log Pages Log Page: May Support 00:30:01.260 Commands Supported & Effects Log Page: Not Supported 00:30:01.260 Feature Identifiers & Effects Log Page:May Support 00:30:01.260 NVMe-MI Commands & Effects Log Page: May Support 00:30:01.260 Data Area 4 for Telemetry Log: Not Supported 00:30:01.260 Error Log Page Entries Supported: 128 00:30:01.260 Keep Alive: Not Supported 00:30:01.260 00:30:01.260 NVM Command Set Attributes 00:30:01.260 ========================== 00:30:01.260 Submission Queue Entry Size 00:30:01.260 Max: 1 00:30:01.260 Min: 1 00:30:01.260 Completion Queue Entry Size 00:30:01.261 Max: 1 00:30:01.261 Min: 1 00:30:01.261 Number of Namespaces: 0 00:30:01.261 Compare Command: Not Supported 00:30:01.261 Write Uncorrectable Command: Not Supported 00:30:01.261 Dataset Management Command: Not Supported 00:30:01.261 Write Zeroes Command: Not Supported 00:30:01.261 Set Features Save Field: Not Supported 00:30:01.261 Reservations: Not Supported 00:30:01.261 Timestamp: Not Supported 00:30:01.261 Copy: Not Supported 00:30:01.261 Volatile Write Cache: Not Present 00:30:01.261 Atomic Write Unit (Normal): 1 00:30:01.261 Atomic Write Unit (PFail): 1 00:30:01.261 Atomic Compare & Write Unit: 1 00:30:01.261 Fused Compare & Write: Supported 00:30:01.261 Scatter-Gather List 00:30:01.261 SGL Command Set: Supported 00:30:01.261 SGL Keyed: Supported 00:30:01.261 SGL Bit Bucket Descriptor: Not Supported 00:30:01.261 SGL Metadata Pointer: Not Supported 00:30:01.261 Oversized SGL: Not Supported 00:30:01.261 SGL Metadata Address: Not Supported 00:30:01.261 SGL Offset: Supported 00:30:01.261 Transport SGL Data Block: Not Supported 00:30:01.261 Replay Protected Memory Block: Not Supported 00:30:01.261 00:30:01.261 Firmware Slot Information 00:30:01.261 ========================= 00:30:01.261 Active slot: 0 00:30:01.261 00:30:01.261 00:30:01.261 Error Log 00:30:01.261 ========= 00:30:01.261 00:30:01.261 Active Namespaces 00:30:01.261 ================= 00:30:01.261 Discovery Log Page 00:30:01.261 ================== 00:30:01.261 Generation Counter: 2 00:30:01.261 Number of Records: 2 00:30:01.261 Record Format: 0 00:30:01.261 00:30:01.261 Discovery Log Entry 0 00:30:01.261 ---------------------- 00:30:01.261 Transport Type: 3 (TCP) 00:30:01.261 Address Family: 1 (IPv4) 00:30:01.261 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:01.261 Entry Flags: 00:30:01.261 Duplicate Returned Information: 1 00:30:01.261 Explicit Persistent Connection Support for Discovery: 1 00:30:01.261 Transport Requirements: 00:30:01.261 Secure Channel: Not Required 00:30:01.261 Port ID: 0 (0x0000) 00:30:01.261 Controller ID: 65535 (0xffff) 00:30:01.261 Admin Max SQ Size: 128 00:30:01.261 Transport Service Identifier: 4420 00:30:01.261 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:01.261 Transport Address: 10.0.0.2 00:30:01.261 
Discovery Log Entry 1 00:30:01.261 ---------------------- 00:30:01.261 Transport Type: 3 (TCP) 00:30:01.261 Address Family: 1 (IPv4) 00:30:01.261 Subsystem Type: 2 (NVM Subsystem) 00:30:01.261 Entry Flags: 00:30:01.261 Duplicate Returned Information: 0 00:30:01.261 Explicit Persistent Connection Support for Discovery: 0 00:30:01.261 Transport Requirements: 00:30:01.261 Secure Channel: Not Required 00:30:01.261 Port ID: 0 (0x0000) 00:30:01.261 Controller ID: 65535 (0xffff) 00:30:01.261 Admin Max SQ Size: 128 00:30:01.261 Transport Service Identifier: 4420 00:30:01.261 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:01.261 Transport Address: 10.0.0.2 [2024-12-07 00:57:17.369227] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:01.261 [2024-12-07 00:57:17.369252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f480) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.261 [2024-12-07 00:57:17.369275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f600) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.261 [2024-12-07 00:57:17.369291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f780) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.261 [2024-12-07 00:57:17.369307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.261 [2024-12-07 00:57:17.369336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.261 [2024-12-07 00:57:17.369379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.261 [2024-12-07 00:57:17.369406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.261 [2024-12-07 00:57:17.369501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.261 [2024-12-07 00:57:17.369516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.261 [2024-12-07 00:57:17.369523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.261 [2024-12-07 00:57:17.369569] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.261 [2024-12-07 00:57:17.369595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.261 [2024-12-07 00:57:17.369693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.261 [2024-12-07 00:57:17.369706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.261 [2024-12-07 00:57:17.369713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369731] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:01.261 [2024-12-07 00:57:17.369739] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:01.261 [2024-12-07 00:57:17.369755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.261 [2024-12-07 00:57:17.369781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.261 [2024-12-07 00:57:17.369801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.261 [2024-12-07 00:57:17.369884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.261 [2024-12-07 00:57:17.369896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.261 [2024-12-07 00:57:17.369903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.369928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.369944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.261 [2024-12-07 00:57:17.369954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.261 [2024-12-07 00:57:17.369974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.261 [2024-12-07 00:57:17.374022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.261 [2024-12-07 00:57:17.374039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.261 [2024-12-07 00:57:17.374046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.374057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.374075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.374084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.374090] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcd3d80) 00:30:01.261 [2024-12-07 00:57:17.374100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.261 [2024-12-07 00:57:17.374122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd3f900, cid 3, qid 0 00:30:01.261 [2024-12-07 00:57:17.374227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.261 [2024-12-07 00:57:17.374241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.261 [2024-12-07 00:57:17.374248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.261 [2024-12-07 00:57:17.374255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd3f900) on tqpair=0xcd3d80 00:30:01.261 [2024-12-07 00:57:17.374269] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:30:01.261 00:30:01.261 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:01.525 [2024-12-07 00:57:17.408599] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:01.525 [2024-12-07 00:57:17.408640] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350844 ] 00:30:01.525 [2024-12-07 00:57:17.457363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:01.525 [2024-12-07 00:57:17.457420] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:01.525 [2024-12-07 00:57:17.457430] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:01.525 [2024-12-07 00:57:17.457447] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:01.525 [2024-12-07 00:57:17.457459] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:01.525 [2024-12-07 00:57:17.461260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:01.525 [2024-12-07 00:57:17.461314] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14e2d80 0 00:30:01.525 [2024-12-07 00:57:17.461433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:01.525 [2024-12-07 00:57:17.461451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:01.525 [2024-12-07 00:57:17.461462] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:01.525 [2024-12-07 00:57:17.461469] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:01.525 [2024-12-07 00:57:17.461500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.525 [2024-12-07 00:57:17.461513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.525 [2024-12-07 00:57:17.461520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.525 [2024-12-07 00:57:17.461533] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:01.525 [2024-12-07 00:57:17.461559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.525 [2024-12-07 00:57:17.468009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.525 [2024-12-07 00:57:17.468029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.525 [2024-12-07 00:57:17.468037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.525 [2024-12-07 00:57:17.468044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.525 [2024-12-07 00:57:17.468064] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:01.525 [2024-12-07 00:57:17.468077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:01.525 [2024-12-07 00:57:17.468086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:01.526 [2024-12-07 00:57:17.468106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.468136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.468162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.468311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.468327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.468334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.468353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:01.526 [2024-12-07 00:57:17.468369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:01.526 [2024-12-07 00:57:17.468385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.468410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.468432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.468564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.468580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.468587] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.468603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:01.526 [2024-12-07 00:57:17.468620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:01.526 [2024-12-07 00:57:17.468633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.468660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.468684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.468776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.468795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.468804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.468822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:01.526 [2024-12-07 00:57:17.468840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.468858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.468871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.468894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.469026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.469043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.469050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.469064] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:01.526 [2024-12-07 00:57:17.469073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:01.526 [2024-12-07 00:57:17.469087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:01.526 [2024-12-07 
00:57:17.469199] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:01.526 [2024-12-07 00:57:17.469208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:01.526 [2024-12-07 00:57:17.469220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.469260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.469282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.469412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.469427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.469449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.469468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:01.526 [2024-12-07 00:57:17.469486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.469515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.469537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.469672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.469688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.469695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.469710] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:01.526 [2024-12-07 00:57:17.469718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:01.526 [2024-12-07 00:57:17.469733] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:01.526 [2024-12-07 00:57:17.469753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:01.526 [2024-12-07 00:57:17.469768] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.469776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.469786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.526 [2024-12-07 00:57:17.469825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.469965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.526 [2024-12-07 00:57:17.469980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.526 [2024-12-07 00:57:17.469987] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=4096, cccid=0 00:30:01.526 [2024-12-07 00:57:17.470019] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154e480) on tqpair(0x14e2d80): expected_datao=0, payload_size=4096 00:30:01.526 [2024-12-07 00:57:17.470030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470050] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.526 [2024-12-07 00:57:17.470092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.526 [2024-12-07 00:57:17.470099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.526 [2024-12-07 00:57:17.470122] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:01.526 [2024-12-07 00:57:17.470132] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:01.526 [2024-12-07 00:57:17.470141] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:01.526 [2024-12-07 00:57:17.470147] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:01.526 [2024-12-07 00:57:17.470155] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:01.526 [2024-12-07 00:57:17.470163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:01.526 [2024-12-07 00:57:17.470179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:01.526 [2024-12-07 00:57:17.470194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.526 [2024-12-07 00:57:17.470213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.526 [2024-12-07 00:57:17.470225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:01.526 [2024-12-07 00:57:17.470248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.526 [2024-12-07 00:57:17.470339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.470354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.470361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.527 [2024-12-07 00:57:17.470382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.527 [2024-12-07 00:57:17.470416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.527 [2024-12-07 00:57:17.470447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.527 [2024-12-07 00:57:17.470478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.527 [2024-12-07 00:57:17.470508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.470529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.470558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:01.527 [2024-12-07 00:57:17.470598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e480, cid 0, qid 0 00:30:01.527 [2024-12-07 00:57:17.470610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e600, cid 1, qid 0 00:30:01.527 [2024-12-07 00:57:17.470617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e780, cid 2, qid 0 00:30:01.527 [2024-12-07 00:57:17.470624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.527 [2024-12-07 00:57:17.470631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.527 [2024-12-07 00:57:17.470776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.470792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.470799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.527 [2024-12-07 00:57:17.470814] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:01.527 [2024-12-07 00:57:17.470822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.470838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.470852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.470864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.470877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.470888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:01.527 [2024-12-07 00:57:17.470925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.527 [2024-12-07 00:57:17.471058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.471074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.471081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.471088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.527 [2024-12-07 00:57:17.471157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.471179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.471197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.471205] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.471215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.527 [2024-12-07 00:57:17.471238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.527 [2024-12-07 00:57:17.471369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.527 [2024-12-07 00:57:17.471384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.527 [2024-12-07 00:57:17.471391] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.471397] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=4096, cccid=4 00:30:01.527 [2024-12-07 00:57:17.471424] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ea80) on tqpair(0x14e2d80): expected_datao=0, payload_size=4096 00:30:01.527 [2024-12-07 00:57:17.471435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.471454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.471463] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.515029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.515038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.527 [2024-12-07 00:57:17.515068] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:01.527 [2024-12-07 00:57:17.515093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.515116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.515132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.515153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.527 [2024-12-07 00:57:17.515177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.527 [2024-12-07 00:57:17.515307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.527 [2024-12-07 00:57:17.515323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.527 [2024-12-07 00:57:17.515330] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515336] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=4096, cccid=4 00:30:01.527 [2024-12-07 00:57:17.515344] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ea80) on tqpair(0x14e2d80): 
expected_datao=0, payload_size=4096 00:30:01.527 [2024-12-07 00:57:17.515357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515380] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.515390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.556151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.556159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.527 [2024-12-07 00:57:17.556196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.556218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:01.527 [2024-12-07 00:57:17.556245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.527 [2024-12-07 00:57:17.556265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.527 [2024-12-07 00:57:17.556289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.527 [2024-12-07 00:57:17.556401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.527 [2024-12-07 00:57:17.556425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.527 [2024-12-07 00:57:17.556451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556457] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=4096, cccid=4 00:30:01.527 [2024-12-07 00:57:17.556465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ea80) on tqpair(0x14e2d80): expected_datao=0, payload_size=4096 00:30:01.527 [2024-12-07 00:57:17.556472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556490] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.556504] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.527 [2024-12-07 00:57:17.598013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.527 [2024-12-07 00:57:17.598034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.527 [2024-12-07 00:57:17.598042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.598064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported log pages (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598146] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:01.528 [2024-12-07 00:57:17.598154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:01.528 [2024-12-07 00:57:17.598163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:01.528 [2024-12-07 00:57:17.598183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.598204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.598215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.598238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.528 [2024-12-07 00:57:17.598266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.528 [2024-12-07 00:57:17.598278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ec00, cid 5, qid 0 00:30:01.528 [2024-12-07 00:57:17.598410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.528 [2024-12-07 00:57:17.598427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.528 [2024-12-07 00:57:17.598434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.598451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.528 [2024-12-07 00:57:17.598460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.528 [2024-12-07 00:57:17.598467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ec00) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.598491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 
[2024-12-07 00:57:17.598505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.598522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.598560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ec00, cid 5, qid 0 00:30:01.528 [2024-12-07 00:57:17.598668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.528 [2024-12-07 00:57:17.598686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.528 [2024-12-07 00:57:17.598695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ec00) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.598718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.598741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.598764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ec00, cid 5, qid 0 00:30:01.528 [2024-12-07 00:57:17.598895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.528 [2024-12-07 00:57:17.598911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.528 [2024-12-07 00:57:17.598918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ec00) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.598942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.598953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.598964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.598986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ec00, cid 5, qid 0 00:30:01.528 [2024-12-07 00:57:17.599125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.528 [2024-12-07 00:57:17.599140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.528 [2024-12-07 00:57:17.599147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ec00) on tqpair=0x14e2d80 00:30:01.528 [2024-12-07 00:57:17.599183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.599207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.599219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 
[2024-12-07 00:57:17.599226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.599236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.599247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.599264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.599275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14e2d80) 00:30:01.528 [2024-12-07 00:57:17.599295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.528 [2024-12-07 00:57:17.599319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ec00, cid 5, qid 0 00:30:01.528 [2024-12-07 00:57:17.599331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ea80, cid 4, qid 0 00:30:01.528 [2024-12-07 00:57:17.599339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ed80, cid 6, qid 0 00:30:01.528 [2024-12-07 00:57:17.599347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ef00, cid 7, qid 0 00:30:01.528 [2024-12-07 00:57:17.599548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.528 [2024-12-07 00:57:17.599566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.528 [2024-12-07 00:57:17.599578] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=8192, cccid=5 00:30:01.528 [2024-12-07 00:57:17.599602] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ec00) on tqpair(0x14e2d80): expected_datao=0, payload_size=8192 00:30:01.528 [2024-12-07 00:57:17.599609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599629] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599639] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.528 [2024-12-07 00:57:17.599670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.528 [2024-12-07 00:57:17.599677] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599683] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=512, cccid=4 00:30:01.528 [2024-12-07 00:57:17.599691] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ea80) on tqpair(0x14e2d80): expected_datao=0, payload_size=512 00:30:01.528 [2024-12-07 00:57:17.599698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.528 [2024-12-07 
00:57:17.599708] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599715] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.528 [2024-12-07 00:57:17.599732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.528 [2024-12-07 00:57:17.599738] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=512, cccid=6 00:30:01.528 [2024-12-07 00:57:17.599751] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ed80) on tqpair(0x14e2d80): expected_datao=0, payload_size=512 00:30:01.528 [2024-12-07 00:57:17.599758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599767] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599774] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:01.528 [2024-12-07 00:57:17.599791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:01.528 [2024-12-07 00:57:17.599797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599803] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14e2d80): datao=0, datal=4096, cccid=7 00:30:01.528 [2024-12-07 00:57:17.599810] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x154ef00) on tqpair(0x14e2d80): expected_datao=0, payload_size=4096 00:30:01.528 [2024-12-07 00:57:17.599818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599827] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599834] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:01.528 [2024-12-07 00:57:17.599850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.529 [2024-12-07 00:57:17.599860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.529 [2024-12-07 00:57:17.599867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.529 [2024-12-07 00:57:17.599874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ec00) on tqpair=0x14e2d80 00:30:01.529 [2024-12-07 00:57:17.599912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.529 [2024-12-07 00:57:17.599923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.529 [2024-12-07 00:57:17.599930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.529 [2024-12-07 00:57:17.599936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ea80) on tqpair=0x14e2d80 00:30:01.529 [2024-12-07 00:57:17.599951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.529 [2024-12-07 00:57:17.599975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.529 [2024-12-07 00:57:17.599982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.529 [2024-12-07 00:57:17.599988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ed80) on tqpair=0x14e2d80 00:30:01.529 [2024-12-07 00:57:17.600007] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.529 [2024-12-07 00:57:17.600018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.529 [2024-12-07 00:57:17.600038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.529 [2024-12-07 00:57:17.600045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ef00) on tqpair=0x14e2d80 00:30:01.529 ===================================================== 00:30:01.529 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.529 ===================================================== 00:30:01.529 Controller Capabilities/Features 00:30:01.529 ================================ 00:30:01.529 Vendor ID: 8086 00:30:01.529 Subsystem Vendor ID: 8086 00:30:01.529 Serial Number: SPDK00000000000001 00:30:01.529 Model Number: SPDK bdev Controller 00:30:01.529 Firmware Version: 25.01 00:30:01.529 Recommended Arb Burst: 6 00:30:01.529 IEEE OUI Identifier: e4 d2 5c 00:30:01.529 Multi-path I/O 00:30:01.529 May have multiple subsystem ports: Yes 00:30:01.529 May have multiple controllers: Yes 00:30:01.529 Associated with SR-IOV VF: No 00:30:01.529 Max Data Transfer Size: 131072 00:30:01.529 Max Number of Namespaces: 32 00:30:01.529 Max Number of I/O Queues: 127 00:30:01.529 NVMe Specification Version (VS): 1.3 00:30:01.529 NVMe Specification Version (Identify): 1.3 00:30:01.529 Maximum Queue Entries: 128 00:30:01.529 Contiguous Queues Required: Yes 00:30:01.529 Arbitration Mechanisms Supported 00:30:01.529 Weighted Round Robin: Not Supported 00:30:01.529 Vendor Specific: Not Supported 00:30:01.529 Reset Timeout: 15000 ms 00:30:01.529 Doorbell Stride: 4 bytes 00:30:01.529 NVM Subsystem Reset: Not Supported 00:30:01.529 Command Sets Supported 00:30:01.529 NVM Command Set: Supported 00:30:01.529 Boot Partition: Not Supported 00:30:01.529 Memory Page Size Minimum: 4096 bytes 00:30:01.529 Memory Page Size Maximum: 4096 bytes 00:30:01.529 Persistent Memory Region: Not Supported 00:30:01.529 Optional Asynchronous Events Supported 00:30:01.529 Namespace Attribute Notices: Supported 00:30:01.529 Firmware Activation Notices: Not Supported 00:30:01.529 ANA Change Notices: Not Supported 00:30:01.529 PLE Aggregate Log Change Notices: Not Supported 00:30:01.529 LBA Status Info Alert Notices: Not Supported 00:30:01.529 EGE Aggregate Log Change Notices: Not Supported 00:30:01.529 Normal NVM Subsystem Shutdown event: Not Supported 00:30:01.529 Zone Descriptor Change Notices: Not Supported 00:30:01.529 Discovery Log Change Notices: Not Supported 00:30:01.529 Controller Attributes 00:30:01.529 128-bit Host Identifier: Supported 00:30:01.529 Non-Operational Permissive Mode: Not Supported 00:30:01.529 NVM Sets: Not Supported 00:30:01.529 Read Recovery Levels: Not Supported 00:30:01.529 Endurance Groups: Not Supported 00:30:01.529 Predictable Latency Mode: Not Supported 00:30:01.529 Traffic Based Keep ALive: Not Supported 00:30:01.529 Namespace Granularity: Not Supported 00:30:01.529 SQ Associations: Not Supported 00:30:01.529 UUID List: Not Supported 00:30:01.529 Multi-Domain Subsystem: Not Supported 00:30:01.529 Fixed Capacity Management: Not Supported 00:30:01.529 Variable Capacity Management: Not Supported 00:30:01.529 Delete Endurance Group: Not Supported 00:30:01.529 Delete NVM Set: Not Supported 00:30:01.529 Extended LBA Formats Supported: Not Supported 00:30:01.529 Flexible Data Placement Supported: Not Supported 00:30:01.529 00:30:01.529 Controller Memory Buffer Support 
00:30:01.529 ================================ 00:30:01.529 Supported: No 00:30:01.529 00:30:01.529 Persistent Memory Region Support 00:30:01.529 ================================ 00:30:01.529 Supported: No 00:30:01.529 00:30:01.529 Admin Command Set Attributes 00:30:01.529 ============================ 00:30:01.529 Security Send/Receive: Not Supported 00:30:01.529 Format NVM: Not Supported 00:30:01.529 Firmware Activate/Download: Not Supported 00:30:01.529 Namespace Management: Not Supported 00:30:01.529 Device Self-Test: Not Supported 00:30:01.529 Directives: Not Supported 00:30:01.529 NVMe-MI: Not Supported 00:30:01.529 Virtualization Management: Not Supported 00:30:01.529 Doorbell Buffer Config: Not Supported 00:30:01.529 Get LBA Status Capability: Not Supported 00:30:01.529 Command & Feature Lockdown Capability: Not Supported 00:30:01.529 Abort Command Limit: 4 00:30:01.529 Async Event Request Limit: 4 00:30:01.529 Number of Firmware Slots: N/A 00:30:01.529 Firmware Slot 1 Read-Only: N/A 00:30:01.529 Firmware Activation Without Reset: N/A 00:30:01.529 Multiple Update Detection Support: N/A 00:30:01.529 Firmware Update Granularity: No Information Provided 00:30:01.529 Per-Namespace SMART Log: No 00:30:01.529 Asymmetric Namespace Access Log Page: Not Supported 00:30:01.529 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:01.529 Command Effects Log Page: Supported 00:30:01.529 Get Log Page Extended Data: Supported 00:30:01.529 Telemetry Log Pages: Not Supported 00:30:01.529 Persistent Event Log Pages: Not Supported 00:30:01.529 Supported Log Pages Log Page: May Support 00:30:01.529 Commands Supported & Effects Log Page: Not Supported 00:30:01.529 Feature Identifiers & Effects Log Page:May Support 00:30:01.529 NVMe-MI Commands & Effects Log Page: May Support 00:30:01.529 Data Area 4 for Telemetry Log: Not Supported 00:30:01.529 Error Log Page Entries Supported: 128 00:30:01.529 Keep Alive: Supported 00:30:01.529 Keep Alive Granularity: 10000 ms 00:30:01.529 00:30:01.529 NVM Command Set Attributes 00:30:01.529 ========================== 00:30:01.529 Submission Queue Entry Size 00:30:01.529 Max: 64 00:30:01.529 Min: 64 00:30:01.529 Completion Queue Entry Size 00:30:01.529 Max: 16 00:30:01.529 Min: 16 00:30:01.529 Number of Namespaces: 32 00:30:01.529 Compare Command: Supported 00:30:01.529 Write Uncorrectable Command: Not Supported 00:30:01.529 Dataset Management Command: Supported 00:30:01.529 Write Zeroes Command: Supported 00:30:01.529 Set Features Save Field: Not Supported 00:30:01.529 Reservations: Supported 00:30:01.529 Timestamp: Not Supported 00:30:01.529 Copy: Supported 00:30:01.529 Volatile Write Cache: Present 00:30:01.529 Atomic Write Unit (Normal): 1 00:30:01.529 Atomic Write Unit (PFail): 1 00:30:01.529 Atomic Compare & Write Unit: 1 00:30:01.529 Fused Compare & Write: Supported 00:30:01.529 Scatter-Gather List 00:30:01.529 SGL Command Set: Supported 00:30:01.529 SGL Keyed: Supported 00:30:01.529 SGL Bit Bucket Descriptor: Not Supported 00:30:01.529 SGL Metadata Pointer: Not Supported 00:30:01.529 Oversized SGL: Not Supported 00:30:01.529 SGL Metadata Address: Not Supported 00:30:01.529 SGL Offset: Supported 00:30:01.529 Transport SGL Data Block: Not Supported 00:30:01.529 Replay Protected Memory Block: Not Supported 00:30:01.529 00:30:01.529 Firmware Slot Information 00:30:01.529 ========================= 00:30:01.529 Active slot: 1 00:30:01.529 Slot 1 Firmware Revision: 25.01 00:30:01.529 00:30:01.529 00:30:01.529 Commands Supported and Effects 00:30:01.529 
============================== 00:30:01.529 Admin Commands 00:30:01.529 -------------- 00:30:01.529 Get Log Page (02h): Supported 00:30:01.529 Identify (06h): Supported 00:30:01.529 Abort (08h): Supported 00:30:01.529 Set Features (09h): Supported 00:30:01.529 Get Features (0Ah): Supported 00:30:01.529 Asynchronous Event Request (0Ch): Supported 00:30:01.529 Keep Alive (18h): Supported 00:30:01.529 I/O Commands 00:30:01.529 ------------ 00:30:01.529 Flush (00h): Supported LBA-Change 00:30:01.529 Write (01h): Supported LBA-Change 00:30:01.529 Read (02h): Supported 00:30:01.529 Compare (05h): Supported 00:30:01.529 Write Zeroes (08h): Supported LBA-Change 00:30:01.529 Dataset Management (09h): Supported LBA-Change 00:30:01.529 Copy (19h): Supported LBA-Change 00:30:01.529 00:30:01.529 Error Log 00:30:01.529 ========= 00:30:01.529 00:30:01.529 Arbitration 00:30:01.529 =========== 00:30:01.529 Arbitration Burst: 1 00:30:01.529 00:30:01.529 Power Management 00:30:01.530 ================ 00:30:01.530 Number of Power States: 1 00:30:01.530 Current Power State: Power State #0 00:30:01.530 Power State #0: 00:30:01.530 Max Power: 0.00 W 00:30:01.530 Non-Operational State: Operational 00:30:01.530 Entry Latency: Not Reported 00:30:01.530 Exit Latency: Not Reported 00:30:01.530 Relative Read Throughput: 0 00:30:01.530 Relative Read Latency: 0 00:30:01.530 Relative Write Throughput: 0 00:30:01.530 Relative Write Latency: 0 00:30:01.530 Idle Power: Not Reported 00:30:01.530 Active Power: Not Reported 00:30:01.530 Non-Operational Permissive Mode: Not Supported 00:30:01.530 00:30:01.530 Health Information 00:30:01.530 ================== 00:30:01.530 Critical Warnings: 00:30:01.530 Available Spare Space: OK 00:30:01.530 Temperature: OK 00:30:01.530 Device Reliability: OK 00:30:01.530 Read Only: No 00:30:01.530 Volatile Memory Backup: OK 00:30:01.530 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:01.530 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:01.530 Available Spare: 0% 00:30:01.530 Available Spare Threshold: 0% 00:30:01.530 Life Percentage Used:[2024-12-07 00:57:17.600157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.600180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.600202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154ef00, cid 7, qid 0 00:30:01.530 [2024-12-07 00:57:17.600350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.600366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.530 [2024-12-07 00:57:17.600373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154ef00) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600430] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:01.530 [2024-12-07 00:57:17.600453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e480) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:01.530 [2024-12-07 00:57:17.600474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e600) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.530 [2024-12-07 00:57:17.600493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e780) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.530 [2024-12-07 00:57:17.600525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.530 [2024-12-07 00:57:17.600546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.600574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.600611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.530 [2024-12-07 00:57:17.600755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.600771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.530 [2024-12-07 00:57:17.600778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.600799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.600824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.600852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.530 [2024-12-07 00:57:17.600959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.600974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.530 [2024-12-07 00:57:17.600981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.600987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.601005] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:01.530 [2024-12-07 00:57:17.601015] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:01.530 [2024-12-07 00:57:17.601032] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.601062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.601084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.530 [2024-12-07 00:57:17.601214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.601229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.530 [2024-12-07 00:57:17.601236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.601261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.601289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.601310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.530 [2024-12-07 00:57:17.601441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.601457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.530 [2024-12-07 00:57:17.601463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.530 [2024-12-07 00:57:17.601488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.530 [2024-12-07 00:57:17.601511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.530 [2024-12-07 00:57:17.601521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.530 [2024-12-07 00:57:17.601543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.530 [2024-12-07 00:57:17.601675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.530 [2024-12-07 00:57:17.601690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.531 [2024-12-07 00:57:17.601697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.531 [2024-12-07 00:57:17.601722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601740] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.531 [2024-12-07 00:57:17.601750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.531 [2024-12-07 00:57:17.601771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.531 [2024-12-07 00:57:17.601862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.531 [2024-12-07 00:57:17.601877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.531 [2024-12-07 00:57:17.601884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.531 [2024-12-07 00:57:17.601912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.601928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.531 [2024-12-07 00:57:17.601939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.531 [2024-12-07 00:57:17.601963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.531 [2024-12-07 00:57:17.606006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.531 [2024-12-07 00:57:17.606024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.531 [2024-12-07 00:57:17.606031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.606053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.531 [2024-12-07 00:57:17.606074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.606085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.606092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14e2d80) 00:30:01.531 [2024-12-07 00:57:17.606103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.531 [2024-12-07 00:57:17.606126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x154e900, cid 3, qid 0 00:30:01.531 [2024-12-07 00:57:17.606255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:01.531 [2024-12-07 00:57:17.606271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:01.531 [2024-12-07 00:57:17.606278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:01.531 [2024-12-07 00:57:17.606285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x154e900) on tqpair=0x14e2d80 00:30:01.531 [2024-12-07 00:57:17.606300] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:30:01.531 0% 00:30:01.531 Data Units Read: 0 00:30:01.531 Data Units Written: 0 00:30:01.531 Host Read Commands: 0 00:30:01.531 Host Write Commands: 0 00:30:01.531 Controller Busy Time: 0 minutes 00:30:01.531 Power Cycles: 0 00:30:01.531 Power On Hours: 0 hours 00:30:01.531 Unsafe Shutdowns: 0 
00:30:01.531 Unrecoverable Media Errors: 0 00:30:01.531 Lifetime Error Log Entries: 0 00:30:01.531 Warning Temperature Time: 0 minutes 00:30:01.531 Critical Temperature Time: 0 minutes 00:30:01.531 00:30:01.531 Number of Queues 00:30:01.531 ================ 00:30:01.531 Number of I/O Submission Queues: 127 00:30:01.531 Number of I/O Completion Queues: 127 00:30:01.531 00:30:01.531 Active Namespaces 00:30:01.531 ================= 00:30:01.531 Namespace ID:1 00:30:01.531 Error Recovery Timeout: Unlimited 00:30:01.531 Command Set Identifier: NVM (00h) 00:30:01.531 Deallocate: Supported 00:30:01.531 Deallocated/Unwritten Error: Not Supported 00:30:01.531 Deallocated Read Value: Unknown 00:30:01.531 Deallocate in Write Zeroes: Not Supported 00:30:01.531 Deallocated Guard Field: 0xFFFF 00:30:01.531 Flush: Supported 00:30:01.531 Reservation: Supported 00:30:01.531 Namespace Sharing Capabilities: Multiple Controllers 00:30:01.531 Size (in LBAs): 131072 (0GiB) 00:30:01.531 Capacity (in LBAs): 131072 (0GiB) 00:30:01.531 Utilization (in LBAs): 131072 (0GiB) 00:30:01.531 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:01.531 EUI64: ABCDEF0123456789 00:30:01.531 UUID: bf16a9d6-8896-41f1-b15c-db0e63c591fa 00:30:01.531 Thin Provisioning: Not Supported 00:30:01.531 Per-NS Atomic Units: Yes 00:30:01.531 Atomic Boundary Size (Normal): 0 00:30:01.531 Atomic Boundary Size (PFail): 0 00:30:01.531 Atomic Boundary Offset: 0 00:30:01.531 Maximum Single Source Range Length: 65535 00:30:01.531 Maximum Copy Length: 65535 00:30:01.531 Maximum Source Range Count: 1 00:30:01.531 NGUID/EUI64 Never Reused: No 00:30:01.531 Namespace Write Protected: No 00:30:01.531 Number of LBA Formats: 1 00:30:01.531 Current LBA Format: LBA Format #00 00:30:01.531 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:01.531 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.531 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.531 rmmod nvme_tcp 00:30:01.531 rmmod nvme_fabrics 00:30:01.531 rmmod nvme_keyring 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:01.790 00:57:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 350814 ']' 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 350814 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 350814 ']' 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 350814 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350814 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350814' 00:30:01.790 killing process with pid 350814 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 350814 00:30:01.790 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 350814 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.048 00:57:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.974 00:57:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.974 00:30:03.974 real 0m5.723s 00:30:03.974 user 0m4.906s 00:30:03.974 sys 0m2.063s 00:30:03.974 00:57:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.974 00:57:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.974 ************************************ 00:30:03.974 END TEST nvmf_identify 00:30:03.974 ************************************ 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:30:03.974 00:57:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.974 ************************************ 00:30:03.974 START TEST nvmf_perf 00:30:03.974 ************************************ 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:03.974 * Looking for test storage... 00:30:03.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:03.974 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.233 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:04.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.234 --rc genhtml_branch_coverage=1 00:30:04.234 --rc genhtml_function_coverage=1 00:30:04.234 --rc genhtml_legend=1 00:30:04.234 --rc geninfo_all_blocks=1 00:30:04.234 --rc geninfo_unexecuted_blocks=1 00:30:04.234 00:30:04.234 ' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:04.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.234 --rc genhtml_branch_coverage=1 00:30:04.234 --rc genhtml_function_coverage=1 00:30:04.234 --rc genhtml_legend=1 00:30:04.234 --rc geninfo_all_blocks=1 00:30:04.234 --rc geninfo_unexecuted_blocks=1 00:30:04.234 00:30:04.234 ' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:04.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.234 --rc genhtml_branch_coverage=1 00:30:04.234 --rc genhtml_function_coverage=1 00:30:04.234 --rc genhtml_legend=1 00:30:04.234 --rc geninfo_all_blocks=1 00:30:04.234 --rc geninfo_unexecuted_blocks=1 00:30:04.234 00:30:04.234 ' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:04.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.234 --rc genhtml_branch_coverage=1 00:30:04.234 --rc genhtml_function_coverage=1 00:30:04.234 --rc genhtml_legend=1 00:30:04.234 --rc geninfo_all_blocks=1 00:30:04.234 --rc geninfo_unexecuted_blocks=1 00:30:04.234 00:30:04.234 ' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:04.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.234 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.235 00:57:20 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.235 00:57:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.769 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:06.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:06.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:06.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.770 00:57:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:06.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.770 00:57:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:30:06.770 00:30:06.770 --- 10.0.0.2 ping statistics --- 00:30:06.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.770 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:30:06.770 00:30:06.770 --- 10.0.0.1 ping statistics --- 00:30:06.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.770 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=352903 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 352903 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 352903 ']' 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:06.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.770 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.770 [2024-12-07 00:57:22.660302] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:06.770 [2024-12-07 00:57:22.660390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.770 [2024-12-07 00:57:22.737499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.770 [2024-12-07 00:57:22.786660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.770 [2024-12-07 00:57:22.786716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.770 [2024-12-07 00:57:22.786729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.770 [2024-12-07 00:57:22.786740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.770 [2024-12-07 00:57:22.786748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.771 [2024-12-07 00:57:22.788380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.771 [2024-12-07 00:57:22.788444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.771 [2024-12-07 00:57:22.788511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.771 [2024-12-07 00:57:22.788514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.771 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.771 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:06.771 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.771 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.771 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:07.030 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.030 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:07.030 00:57:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:10.321 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:10.322 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:10.322 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:10.322 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:10.579 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:30:10.579 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:10.579 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:10.579 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:10.579 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:10.839 [2024-12-07 00:57:26.942202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.839 00:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.098 00:57:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:11.098 00:57:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.354 00:57:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:11.354 00:57:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:11.917 00:57:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.917 [2024-12-07 00:57:28.022137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.917 00:57:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.174 00:57:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:12.174 00:57:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:12.174 00:57:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:12.174 00:57:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:13.549 Initializing NVMe Controllers 00:30:13.549 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:13.549 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:13.549 Initialization complete. Launching workers. 
00:30:13.549 ======================================================== 00:30:13.549 Latency(us) 00:30:13.549 Device Information : IOPS MiB/s Average min max 00:30:13.549 PCIE (0000:88:00.0) NSID 1 from core 0: 84825.83 331.35 376.76 43.26 6253.22 00:30:13.549 ======================================================== 00:30:13.549 Total : 84825.83 331.35 376.76 43.26 6253.22 00:30:13.549 00:30:13.549 00:57:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.926 Initializing NVMe Controllers 00:30:14.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.926 Initialization complete. Launching workers. 00:30:14.927 ======================================================== 00:30:14.927 Latency(us) 00:30:14.927 Device Information : IOPS MiB/s Average min max 00:30:14.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11775.84 135.59 44837.23 00:30:14.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17937.42 7943.55 47929.29 00:30:14.927 ======================================================== 00:30:14.927 Total : 143.00 0.56 14188.77 135.59 47929.29 00:30:14.927 00:30:14.927 00:57:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.302 Initializing NVMe Controllers 00:30:16.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:16.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:16.302 Initialization complete. Launching workers. 00:30:16.302 ======================================================== 00:30:16.302 Latency(us) 00:30:16.302 Device Information : IOPS MiB/s Average min max 00:30:16.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8411.93 32.86 3804.77 733.06 10336.57 00:30:16.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.73 15.17 8274.85 4993.76 50655.41 00:30:16.302 ======================================================== 00:30:16.302 Total : 12295.67 48.03 5216.69 733.06 50655.41 00:30:16.302 00:30:16.302 00:57:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:16.302 00:57:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:16.302 00:57:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.831 Initializing NVMe Controllers 00:30:18.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.831 Controller IO queue size 128, less than required. 00:30:18.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:18.831 Controller IO queue size 128, less than required. 00:30:18.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:18.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:18.831 Initialization complete. Launching workers. 00:30:18.831 ======================================================== 00:30:18.831 Latency(us) 00:30:18.831 Device Information : IOPS MiB/s Average min max 00:30:18.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1748.47 437.12 74808.26 53515.78 131584.26 00:30:18.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 576.99 144.25 225371.77 81632.88 344746.83 00:30:18.831 ======================================================== 00:30:18.831 Total : 2325.46 581.36 112165.88 53515.78 344746.83 00:30:18.831 00:30:18.831 00:57:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:18.831 No valid NVMe controllers or AIO or URING devices found 00:30:18.831 Initializing NVMe Controllers 00:30:18.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.831 Controller IO queue size 128, less than required. 00:30:18.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.831 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:18.831 Controller IO queue size 128, less than required. 00:30:18.831 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.831 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:18.831 WARNING: Some requested NVMe devices were skipped 00:30:18.831 00:57:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:21.361 Initializing NVMe Controllers 00:30:21.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.361 Controller IO queue size 128, less than required. 00:30:21.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:21.361 Controller IO queue size 128, less than required. 00:30:21.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:21.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:21.361 Initialization complete. Launching workers. 
00:30:21.361 00:30:21.361 ==================== 00:30:21.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:21.361 TCP transport: 00:30:21.361 polls: 8629 00:30:21.361 idle_polls: 5547 00:30:21.361 sock_completions: 3082 00:30:21.361 nvme_completions: 6021 00:30:21.361 submitted_requests: 9092 00:30:21.361 queued_requests: 1 00:30:21.361 00:30:21.361 ==================== 00:30:21.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:21.361 TCP transport: 00:30:21.361 polls: 11876 00:30:21.361 idle_polls: 8502 00:30:21.361 sock_completions: 3374 00:30:21.361 nvme_completions: 6403 00:30:21.361 submitted_requests: 9650 00:30:21.361 queued_requests: 1 00:30:21.361 ======================================================== 00:30:21.361 Latency(us) 00:30:21.361 Device Information : IOPS MiB/s Average min max 00:30:21.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1501.93 375.48 86180.84 57852.50 162408.84 00:30:21.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1597.23 399.31 80780.54 45760.45 117447.08 00:30:21.361 ======================================================== 00:30:21.361 Total : 3099.16 774.79 83397.65 45760.45 162408.84 00:30:21.361 00:30:21.361 00:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:21.361 00:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.618 00:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:21.618 00:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:21.618 00:57:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9d22aa81-d7cd-431f-9241-2c06233bfd56 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9d22aa81-d7cd-431f-9241-2c06233bfd56 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=9d22aa81-d7cd-431f-9241-2c06233bfd56 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:24.899 00:57:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:25.466 { 00:30:25.466 "uuid": "9d22aa81-d7cd-431f-9241-2c06233bfd56", 00:30:25.466 "name": "lvs_0", 00:30:25.466 "base_bdev": "Nvme0n1", 00:30:25.466 "total_data_clusters": 238234, 00:30:25.466 "free_clusters": 238234, 00:30:25.466 "block_size": 512, 00:30:25.466 "cluster_size": 4194304 00:30:25.466 } 00:30:25.466 ]' 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9d22aa81-d7cd-431f-9241-2c06233bfd56") .free_clusters' 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:25.466 00:57:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9d22aa81-d7cd-431f-9241-2c06233bfd56") .cluster_size' 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:25.466 952936 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:25.466 00:57:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d22aa81-d7cd-431f-9241-2c06233bfd56 lbd_0 20480 00:30:26.035 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e389d4cc-49bb-48f2-a150-65e8b3720a02 00:30:26.035 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e389d4cc-49bb-48f2-a150-65e8b3720a02 lvs_n_0 00:30:26.971 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1c50c28a-dffd-4a67-9b59-e851cf83f3cc 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1c50c28a-dffd-4a67-9b59-e851cf83f3cc 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=1c50c28a-dffd-4a67-9b59-e851cf83f3cc 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:26.972 00:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:27.230 { 00:30:27.230 "uuid": "9d22aa81-d7cd-431f-9241-2c06233bfd56", 00:30:27.230 "name": "lvs_0", 00:30:27.230 "base_bdev": "Nvme0n1", 00:30:27.230 "total_data_clusters": 238234, 00:30:27.230 "free_clusters": 233114, 00:30:27.230 "block_size": 512, 00:30:27.230 "cluster_size": 4194304 00:30:27.230 }, 00:30:27.230 { 00:30:27.230 "uuid": "1c50c28a-dffd-4a67-9b59-e851cf83f3cc", 00:30:27.230 "name": "lvs_n_0", 00:30:27.230 "base_bdev": "e389d4cc-49bb-48f2-a150-65e8b3720a02", 00:30:27.230 "total_data_clusters": 5114, 00:30:27.230 "free_clusters": 5114, 00:30:27.230 "block_size": 512, 00:30:27.230 "cluster_size": 4194304 00:30:27.230 } 00:30:27.230 ]' 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1c50c28a-dffd-4a67-9b59-e851cf83f3cc") .free_clusters' 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1c50c28a-dffd-4a67-9b59-e851cf83f3cc") .cluster_size' 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:27.230 20456 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:27.230 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c50c28a-dffd-4a67-9b59-e851cf83f3cc lbd_nest_0 20456 00:30:27.489 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=05497e73-2a22-4737-a8c9-c827f87764ac 00:30:27.489 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.056 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:28.056 00:57:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 05497e73-2a22-4737-a8c9-c827f87764ac 00:30:28.056 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.314 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:28.314 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:28.314 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:28.314 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:28.314 00:57:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.527 Initializing NVMe Controllers 00:30:40.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.527 Initialization complete. Launching workers. 00:30:40.527 ======================================================== 00:30:40.527 Latency(us) 00:30:40.527 Device Information : IOPS MiB/s Average min max 00:30:40.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.80 0.02 21434.24 169.46 45805.92 00:30:40.527 ======================================================== 00:30:40.527 Total : 46.80 0.02 21434.24 169.46 45805.92 00:30:40.527 00:30:40.527 00:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:40.527 00:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:50.508 Initializing NVMe Controllers 00:30:50.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:50.508 Initialization complete. Launching workers. 
00:30:50.508 ======================================================== 00:30:50.508 Latency(us) 00:30:50.508 Device Information : IOPS MiB/s Average min max 00:30:50.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.28 8.91 14040.74 5023.98 53871.66 00:30:50.508 ======================================================== 00:30:50.508 Total : 71.28 8.91 14040.74 5023.98 53871.66 00:30:50.508 00:30:50.508 00:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:50.508 00:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:50.508 00:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.491 Initializing NVMe Controllers 00:31:00.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.491 Initialization complete. Launching workers. 00:31:00.491 ======================================================== 00:31:00.491 Latency(us) 00:31:00.491 Device Information : IOPS MiB/s Average min max 00:31:00.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7405.20 3.62 4321.46 281.71 8961.79 00:31:00.492 ======================================================== 00:31:00.492 Total : 7405.20 3.62 4321.46 281.71 8961.79 00:31:00.492 00:31:00.492 00:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:00.492 00:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.482 Initializing NVMe Controllers 00:31:10.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.482 Initialization complete. Launching workers. 00:31:10.482 ======================================================== 00:31:10.482 Latency(us) 00:31:10.482 Device Information : IOPS MiB/s Average min max 00:31:10.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3917.50 489.69 8172.10 766.08 18642.27 00:31:10.482 ======================================================== 00:31:10.482 Total : 3917.50 489.69 8172.10 766.08 18642.27 00:31:10.482 00:31:10.482 00:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:10.482 00:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.482 00:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.468 Initializing NVMe Controllers 00:31:20.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.468 Controller IO queue size 128, less than required. 00:31:20.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:20.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.468 Initialization complete. Launching workers. 00:31:20.468 ======================================================== 00:31:20.468 Latency(us) 00:31:20.468 Device Information : IOPS MiB/s Average min max 00:31:20.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11775.28 5.75 10877.29 1860.45 48619.39 00:31:20.468 ======================================================== 00:31:20.468 Total : 11775.28 5.75 10877.29 1860.45 48619.39 00:31:20.468 00:31:20.468 00:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.468 00:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.672 Initializing NVMe Controllers 00:31:32.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.672 Controller IO queue size 128, less than required. 00:31:32.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:32.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.672 Initialization complete. Launching workers. 00:31:32.672 ======================================================== 00:31:32.672 Latency(us) 00:31:32.672 Device Information : IOPS MiB/s Average min max 00:31:32.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.60 148.95 108080.11 23889.12 224344.83 00:31:32.672 ======================================================== 00:31:32.672 Total : 1191.60 148.95 108080.11 23889.12 224344.83 00:31:32.672 00:31:32.672 00:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.672 00:58:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05497e73-2a22-4737-a8c9-c827f87764ac 00:31:32.672 00:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:32.672 00:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e389d4cc-49bb-48f2-a150-65e8b3720a02 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.672 rmmod nvme_tcp 
00:31:32.672 rmmod nvme_fabrics 00:31:32.672 rmmod nvme_keyring 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 352903 ']' 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 352903 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 352903 ']' 00:31:32.672 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 352903 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352903 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352903' 00:31:32.673 killing process with pid 352903 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 352903 00:31:32.673 00:58:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 352903 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.576 00:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:36.485 00:31:36.485 real 1m32.239s 00:31:36.485 user 5m41.827s 00:31:36.485 sys 0m15.551s 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:36.485 ************************************ 00:31:36.485 END TEST nvmf_perf 00:31:36.485 ************************************ 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:36.485 00:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.486 ************************************ 00:31:36.486 START TEST nvmf_fio_host 00:31:36.486 ************************************ 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:36.486 * Looking for test storage... 00:31:36.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.486 --rc genhtml_branch_coverage=1 00:31:36.486 --rc genhtml_function_coverage=1 00:31:36.486 --rc genhtml_legend=1 00:31:36.486 --rc geninfo_all_blocks=1 00:31:36.486 --rc geninfo_unexecuted_blocks=1 00:31:36.486 00:31:36.486 ' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.486 --rc genhtml_branch_coverage=1 00:31:36.486 --rc genhtml_function_coverage=1 00:31:36.486 --rc genhtml_legend=1 00:31:36.486 --rc geninfo_all_blocks=1 00:31:36.486 --rc geninfo_unexecuted_blocks=1 00:31:36.486 00:31:36.486 ' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.486 --rc genhtml_branch_coverage=1 00:31:36.486 --rc genhtml_function_coverage=1 00:31:36.486 --rc genhtml_legend=1 00:31:36.486 --rc geninfo_all_blocks=1 00:31:36.486 --rc geninfo_unexecuted_blocks=1 00:31:36.486 00:31:36.486 ' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:36.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.486 --rc genhtml_branch_coverage=1 00:31:36.486 --rc genhtml_function_coverage=1 00:31:36.486 --rc genhtml_legend=1 00:31:36.486 --rc geninfo_all_blocks=1 00:31:36.486 --rc geninfo_unexecuted_blocks=1 00:31:36.486 00:31:36.486 ' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.486 00:58:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.486 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:36.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:36.487 
00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:36.487 00:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:38.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:38.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:38.396 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:38.397 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:38.397 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.397 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.655 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:31:38.656 00:31:38.656 --- 10.0.0.2 ping statistics --- 00:31:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.656 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:38.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:31:38.656 00:31:38.656 --- 10.0.0.1 ping statistics --- 00:31:38.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.656 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=365014 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 365014 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 365014 ']' 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.656 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 [2024-12-07 00:58:54.727542] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:31:38.656 [2024-12-07 00:58:54.727632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.656 [2024-12-07 00:58:54.801875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.915 [2024-12-07 00:58:54.850404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.915 [2024-12-07 00:58:54.850460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.915 [2024-12-07 00:58:54.850473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.915 [2024-12-07 00:58:54.850484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.915 [2024-12-07 00:58:54.850494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.915 [2024-12-07 00:58:54.852197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.915 [2024-12-07 00:58:54.852250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.915 [2024-12-07 00:58:54.852304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:38.915 [2024-12-07 00:58:54.852307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.915 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.915 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:38.915 00:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:39.183 [2024-12-07 00:58:55.282231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.183 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:39.183 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.183 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.444 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:39.702 Malloc1 00:31:39.702 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:39.959 00:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:40.217 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.474 [2024-12-07 00:58:56.474710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.474 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:40.733 00:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.993 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:40.993 fio-3.35 00:31:40.993 Starting 1 thread 00:31:43.527 00:31:43.527 test: (groupid=0, jobs=1): 
err= 0: pid=365376: Sat Dec 7 00:58:59 2024 00:31:43.527 read: IOPS=8726, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2007msec) 00:31:43.527 slat (nsec): min=1918, max=163529, avg=2551.01, stdev=1942.18 00:31:43.527 clat (usec): min=2260, max=14163, avg=7985.26, stdev=665.83 00:31:43.527 lat (usec): min=2292, max=14165, avg=7987.81, stdev=665.69 00:31:43.527 clat percentiles (usec): 00:31:43.527 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7439], 00:31:43.527 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:31:43.527 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8979], 00:31:43.527 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11207], 99.95th=[12256], 00:31:43.527 | 99.99th=[13435] 00:31:43.527 bw ( KiB/s): min=33952, max=35608, per=100.00%, avg=34908.00, stdev=703.65, samples=4 00:31:43.527 iops : min= 8488, max= 8902, avg=8727.00, stdev=175.91, samples=4 00:31:43.527 write: IOPS=8725, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2007msec); 0 zone resets 00:31:43.527 slat (usec): min=2, max=136, avg= 2.72, stdev= 1.54 00:31:43.527 clat (usec): min=1712, max=13256, avg=6626.79, stdev=560.02 00:31:43.527 lat (usec): min=1721, max=13259, avg=6629.51, stdev=559.95 00:31:43.527 clat percentiles (usec): 00:31:43.527 | 1.00th=[ 5342], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:31:43.527 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:31:43.527 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:31:43.527 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[11076], 99.95th=[11338], 00:31:43.527 | 99.99th=[13304] 00:31:43.527 bw ( KiB/s): min=34712, max=35048, per=99.97%, avg=34892.00, stdev=139.26, samples=4 00:31:43.527 iops : min= 8678, max= 8762, avg=8723.00, stdev=34.81, samples=4 00:31:43.527 lat (msec) : 2=0.02%, 4=0.11%, 10=99.67%, 20=0.20% 00:31:43.527 cpu : usr=63.01%, sys=35.39%, ctx=97, majf=0, minf=35 00:31:43.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:43.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.527 issued rwts: total=17514,17513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.527 00:31:43.527 Run status group 0 (all jobs): 00:31:43.528 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2007-2007msec 00:31:43.528 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2007-2007msec 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:43.528 00:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:43.528 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:43.528 fio-3.35 00:31:43.528 Starting 1 thread 00:31:46.060 00:31:46.060 test: (groupid=0, jobs=1): err= 0: pid=365706: Sat Dec 7 00:59:01 2024 00:31:46.060 read: IOPS=8241, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:31:46.060 slat (usec): min=2, max=113, avg= 3.63, stdev= 1.77 00:31:46.060 clat (usec): min=2436, max=15626, avg=8668.21, stdev=1861.34 00:31:46.060 lat (usec): min=2439, max=15629, avg=8671.84, stdev=1861.35 00:31:46.060 clat percentiles (usec): 00:31:46.060 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7111], 00:31:46.060 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:31:46.060 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11207], 95.00th=[11863], 00:31:46.060 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14484], 99.95th=[14484], 00:31:46.060 | 99.99th=[14746] 00:31:46.060 bw ( KiB/s): min=59680, max=81920, per=52.93%, avg=69800.00, stdev=9895.73, samples=4 00:31:46.060 iops : min= 3730, max= 5120, avg=4362.50, stdev=618.48, samples=4 00:31:46.060 write: IOPS=5032, BW=78.6MiB/s (82.4MB/s)(142MiB/1810msec); 0 zone resets 00:31:46.060 slat 
(usec): min=30, max=148, avg=33.61, stdev= 5.21 00:31:46.060 clat (usec): min=5113, max=18871, avg=11770.50, stdev=1964.06 00:31:46.060 lat (usec): min=5146, max=18920, avg=11804.11, stdev=1964.08 00:31:46.060 clat percentiles (usec): 00:31:46.060 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:31:46.060 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:31:46.060 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[15270], 00:31:46.060 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:31:46.060 | 99.99th=[18744] 00:31:46.060 bw ( KiB/s): min=63040, max=85024, per=90.16%, avg=72592.00, stdev=10226.47, samples=4 00:31:46.060 iops : min= 3940, max= 5314, avg=4537.00, stdev=639.15, samples=4 00:31:46.060 lat (msec) : 4=0.20%, 10=57.01%, 20=42.80% 00:31:46.060 cpu : usr=77.53%, sys=21.28%, ctx=39, majf=0, minf=63 00:31:46.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:46.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.060 issued rwts: total=16549,9108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.060 00:31:46.060 Run status group 0 (all jobs): 00:31:46.060 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2008-2008msec 00:31:46.060 WRITE: bw=78.6MiB/s (82.4MB/s), 78.6MiB/s-78.6MiB/s (82.4MB/s-82.4MB/s), io=142MiB (149MB), run=1810-1810msec 00:31:46.060 00:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:46.318 00:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:49.620 Nvme0n1 00:31:49.621 00:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=dbf35bd3-e27e-4b94-86a6-e9844692f654 00:31:52.382 00:59:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb dbf35bd3-e27e-4b94-86a6-e9844692f654 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=dbf35bd3-e27e-4b94-86a6-e9844692f654 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:52.382 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:52.661 { 00:31:52.661 "uuid": "dbf35bd3-e27e-4b94-86a6-e9844692f654", 00:31:52.661 "name": "lvs_0", 00:31:52.661 "base_bdev": "Nvme0n1", 00:31:52.661 "total_data_clusters": 930, 00:31:52.661 "free_clusters": 930, 00:31:52.661 "block_size": 512, 00:31:52.661 "cluster_size": 1073741824 00:31:52.661 } 00:31:52.661 ]' 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="dbf35bd3-e27e-4b94-86a6-e9844692f654") .free_clusters' 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="dbf35bd3-e27e-4b94-86a6-e9844692f654") .cluster_size' 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:52.661 952320 00:31:52.661 00:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:53.228 35a612e2-82b8-4642-8274-6cc918cdd86c 00:31:53.228 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:53.484 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:53.741 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.998 00:59:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.998 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:53.998 fio-3.35 00:31:53.998 Starting 1 thread 00:31:56.529 00:31:56.529 test: (groupid=0, jobs=1): err= 0: pid=367120: Sat Dec 7 00:59:12 2024 00:31:56.529 read: IOPS=5937, BW=23.2MiB/s (24.3MB/s)(46.6MiB/2008msec) 00:31:56.529 slat (nsec): min=1897, max=153478, avg=2525.17, stdev=2096.25 00:31:56.529 clat (usec): min=1206, max=171403, avg=11796.02, stdev=11699.93 00:31:56.529 lat (usec): min=1210, max=171442, avg=11798.55, stdev=11700.27 00:31:56.529 clat percentiles (msec): 00:31:56.529 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:56.529 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:56.529 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:56.529 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:56.529 | 99.99th=[ 171] 00:31:56.529 bw ( KiB/s): min=16880, max=26072, 
per=99.76%, avg=23692.00, stdev=4543.27, samples=4 00:31:56.529 iops : min= 4220, max= 6518, avg=5923.00, stdev=1135.82, samples=4 00:31:56.529 write: IOPS=5931, BW=23.2MiB/s (24.3MB/s)(46.5MiB/2008msec); 0 zone resets 00:31:56.529 slat (usec): min=2, max=118, avg= 2.70, stdev= 1.63 00:31:56.529 clat (usec): min=357, max=169431, avg=9654.32, stdev=10966.77 00:31:56.529 lat (usec): min=360, max=169438, avg=9657.02, stdev=10967.09 00:31:56.529 clat percentiles (msec): 00:31:56.529 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:56.529 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:56.529 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:56.529 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:56.529 | 99.99th=[ 169] 00:31:56.529 bw ( KiB/s): min=17896, max=25920, per=99.97%, avg=23718.00, stdev=3897.84, samples=4 00:31:56.529 iops : min= 4474, max= 6480, avg=5929.50, stdev=974.46, samples=4 00:31:56.529 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:56.529 lat (msec) : 2=0.03%, 4=0.10%, 10=54.72%, 20=44.59%, 250=0.54% 00:31:56.529 cpu : usr=61.34%, sys=37.37%, ctx=121, majf=0, minf=35 00:31:56.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:56.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:56.529 issued rwts: total=11922,11910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:56.529 00:31:56.529 Run status group 0 (all jobs): 00:31:56.529 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.6MiB (48.8MB), run=2008-2008msec 00:31:56.529 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2008-2008msec 00:31:56.529 00:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:56.787 00:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=efb571c6-0dad-433b-87a0-87bd37ef94e3 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb efb571c6-0dad-433b-87a0-87bd37ef94e3 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=efb571c6-0dad-433b-87a0-87bd37ef94e3 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:58.165 { 00:31:58.165 "uuid": "dbf35bd3-e27e-4b94-86a6-e9844692f654", 00:31:58.165 "name": "lvs_0", 00:31:58.165 "base_bdev": "Nvme0n1", 00:31:58.165 "total_data_clusters": 930, 00:31:58.165 "free_clusters": 0, 00:31:58.165 "block_size": 512, 
00:31:58.165 "cluster_size": 1073741824 00:31:58.165 }, 00:31:58.165 { 00:31:58.165 "uuid": "efb571c6-0dad-433b-87a0-87bd37ef94e3", 00:31:58.165 "name": "lvs_n_0", 00:31:58.165 "base_bdev": "35a612e2-82b8-4642-8274-6cc918cdd86c", 00:31:58.165 "total_data_clusters": 237847, 00:31:58.165 "free_clusters": 237847, 00:31:58.165 "block_size": 512, 00:31:58.165 "cluster_size": 4194304 00:31:58.165 } 00:31:58.165 ]' 00:31:58.165 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="efb571c6-0dad-433b-87a0-87bd37ef94e3") .free_clusters' 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="efb571c6-0dad-433b-87a0-87bd37ef94e3") .cluster_size' 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:58.424 951388 00:31:58.424 00:59:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:58.992 2d025a7f-76e2-478f-99d6-6b3056095635 00:31:58.992 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:59.249 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:59.506 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in 
"${sanitizers[@]}" 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:59.764 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:00.023 00:59:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:00.023 fio-3.35 00:32:00.023 Starting 1 thread 00:32:02.549 00:32:02.550 test: (groupid=0, jobs=1): err= 0: pid=367862: Sat Dec 7 00:59:18 2024 00:32:02.550 read: IOPS=5804, BW=22.7MiB/s (23.8MB/s)(45.5MiB/2008msec) 00:32:02.550 slat (nsec): min=1934, max=141331, avg=2539.57, stdev=2016.84 00:32:02.550 clat (usec): min=4337, max=19478, avg=12052.06, stdev=1103.81 00:32:02.550 lat (usec): min=4342, max=19480, avg=12054.60, stdev=1103.69 00:32:02.550 clat percentiles (usec): 00:32:02.550 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:32:02.550 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:32:02.550 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13698], 00:32:02.550 | 99.00th=[14484], 99.50th=[14746], 99.90th=[19006], 99.95th=[19268], 00:32:02.550 | 99.99th=[19530] 00:32:02.550 bw ( KiB/s): min=21848, max=23848, per=99.80%, avg=23172.00, stdev=901.00, samples=4 00:32:02.550 iops : min= 5462, max= 5962, avg=5793.00, stdev=225.25, samples=4 00:32:02.550 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2008msec); 0 zone resets 00:32:02.550 slat (usec): min=2, max=111, avg= 2.67, stdev= 1.45 00:32:02.550 clat (usec): min=2069, max=17677, avg=9846.46, stdev=899.17 00:32:02.550 lat (usec): min=2075, max=17679, avg=9849.13, stdev=899.12 00:32:02.550 clat percentiles (usec): 00:32:02.550 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:32:02.550 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:32:02.550 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:32:02.550 | 
99.00th=[11731], 99.50th=[12125], 99.90th=[14746], 99.95th=[16057], 00:32:02.550 | 99.99th=[16319] 00:32:02.550 bw ( KiB/s): min=22928, max=23296, per=99.91%, avg=23140.00, stdev=158.59, samples=4 00:32:02.550 iops : min= 5732, max= 5824, avg=5785.00, stdev=39.65, samples=4 00:32:02.550 lat (msec) : 4=0.04%, 10=29.76%, 20=70.19% 00:32:02.550 cpu : usr=62.03%, sys=36.77%, ctx=113, majf=0, minf=35 00:32:02.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:02.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:02.550 issued rwts: total=11656,11627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:02.550 00:32:02.550 Run status group 0 (all jobs): 00:32:02.550 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2008-2008msec 00:32:02.550 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2008-2008msec 00:32:02.550 00:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:02.808 00:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:02.808 00:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:07.000 00:59:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:07.000 00:59:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:10.289 00:59:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:10.289 00:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.194 00:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.194 rmmod nvme_tcp 00:32:12.194 rmmod nvme_fabrics 00:32:12.194 rmmod nvme_keyring 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 
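(Aside on the lvol sizing traced earlier in this test: the 951388 figure handed to bdev_lvol_create comes straight from the lvstore JSON printed by bdev_lvol_get_lvstores. A minimal sketch of that arithmetic, using the exact field values shown above; the variable names here are illustrative, not the script's own:

    fc=237847                        # "free_clusters" reported for lvs_n_0
    cs=4194304                       # "cluster_size" in bytes (4 MiB)
    echo $(( fc * (cs / 1048576) ))  # 237847 * 4 = 951388 MB, the size passed to bdev_lvol_create -l lvs_n_0 lbd_nest_0

The helper in autotest_common.sh pulls fc and cs with the jq filters visible in the trace above before echoing the result.)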
00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 365014 ']' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 365014 ']' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365014' 00:32:12.194 killing process with pid 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 365014 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.194 00:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.733 00:32:14.733 real 0m37.980s 00:32:14.733 user 2m26.175s 00:32:14.733 sys 0m7.023s 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.733 ************************************ 00:32:14.733 END TEST nvmf_fio_host 00:32:14.733 ************************************ 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.733 ************************************ 00:32:14.733 START TEST nvmf_failover 00:32:14.733 ************************************ 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:14.733 * Looking for test storage... 00:32:14.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.733 --rc genhtml_branch_coverage=1 00:32:14.733 --rc genhtml_function_coverage=1 00:32:14.733 --rc genhtml_legend=1 00:32:14.733 --rc geninfo_all_blocks=1 00:32:14.733 --rc geninfo_unexecuted_blocks=1 00:32:14.733 00:32:14.733 ' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.733 --rc genhtml_branch_coverage=1 00:32:14.733 --rc genhtml_function_coverage=1 00:32:14.733 --rc genhtml_legend=1 00:32:14.733 --rc geninfo_all_blocks=1 00:32:14.733 --rc geninfo_unexecuted_blocks=1 00:32:14.733 00:32:14.733 ' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.733 --rc genhtml_branch_coverage=1 00:32:14.733 --rc genhtml_function_coverage=1 00:32:14.733 --rc genhtml_legend=1 00:32:14.733 --rc geninfo_all_blocks=1 00:32:14.733 --rc geninfo_unexecuted_blocks=1 00:32:14.733 00:32:14.733 ' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.733 --rc genhtml_branch_coverage=1 00:32:14.733 --rc genhtml_function_coverage=1 00:32:14.733 --rc genhtml_legend=1 00:32:14.733 --rc geninfo_all_blocks=1 00:32:14.733 --rc geninfo_unexecuted_blocks=1 00:32:14.733 00:32:14.733 ' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.733 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
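(For orientation: the variables just traced (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, rpc_py, and the bdevperf RPC socket defined next) feed the target bring-up that follows. A condensed view of that RPC sequence, taken from the commands traced further down in this log, with the long workspace paths shortened for readability:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # repeated for 4421 and 4422
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f                    # then bdev_nvme_attach_controller ... -x failover over that socket

The failover test then removes and re-adds listeners on those three ports while bdevperf keeps I/O running, as the trace below shows.)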
00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:14.734 00:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:16.640 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:16.640 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:16.640 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:16.640 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.640 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:32:16.900 00:32:16.900 --- 10.0.0.2 ping statistics --- 00:32:16.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.900 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:32:16.900 00:32:16.900 --- 10.0.0.1 ping statistics --- 00:32:16.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.900 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=371225 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 371225 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 371225 ']' 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.900 00:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.900 [2024-12-07 00:59:32.885364] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:32:16.900 [2024-12-07 00:59:32.885444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.900 [2024-12-07 00:59:32.960327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:16.900 [2024-12-07 00:59:33.008991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:16.900 [2024-12-07 00:59:33.009082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.900 [2024-12-07 00:59:33.009097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.900 [2024-12-07 00:59:33.009109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.900 [2024-12-07 00:59:33.009120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.900 [2024-12-07 00:59:33.010655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:16.900 [2024-12-07 00:59:33.010698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.900 [2024-12-07 00:59:33.010701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.158 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:17.415 [2024-12-07 00:59:33.411389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.415 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:17.673 Malloc0 00:32:17.673 00:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.930 00:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:18.188 00:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.446 [2024-12-07 00:59:34.522773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.446 00:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:18.703 [2024-12-07 00:59:34.787509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:18.703 00:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:18.961 [2024-12-07 00:59:35.072445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=371513 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 371513 /var/tmp/bdevperf.sock 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 371513 ']' 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.961 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:19.527 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:19.527 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:19.527 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:19.784 NVMe0n1 00:32:19.784 00:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.042 00:32:20.042 00:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=371616 00:32:20.042 00:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:20.042 00:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:20.980 00:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.238 00:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:24.521 00:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:24.779 00:32:24.779 00:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:25.036 00:59:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:28.328 00:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.328 [2024-12-07 00:59:44.379684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.328 00:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:29.262 00:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:29.520 [2024-12-07 00:59:45.651539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 
[2024-12-07 00:59:45.651824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.520 [2024-12-07 00:59:45.651934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf300 is same with the state(6) to be set 00:32:29.778 00:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 371616 00:32:36.350 { 00:32:36.350 "results": [ 00:32:36.350 { 00:32:36.350 "job": "NVMe0n1", 00:32:36.350 "core_mask": "0x1", 00:32:36.350 "workload": "verify", 00:32:36.350 "status": "finished", 00:32:36.350 "verify_range": { 00:32:36.350 "start": 0, 00:32:36.350 "length": 16384 00:32:36.350 }, 00:32:36.350 "queue_depth": 128, 00:32:36.350 "io_size": 4096, 00:32:36.350 "runtime": 15.009632, 00:32:36.350 "iops": 8376.820964031629, 00:32:36.350 "mibps": 32.72195689074855, 00:32:36.350 "io_failed": 10188, 00:32:36.350 "io_timeout": 0, 00:32:36.350 "avg_latency_us": 14107.507944369647, 00:32:36.350 "min_latency_us": 540.0651851851852, 00:32:36.350 "max_latency_us": 16699.543703703705 00:32:36.350 } 00:32:36.350 ], 00:32:36.350 "core_count": 1 00:32:36.350 } 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 371513 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 371513 ']' 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 371513 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371513 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371513' 00:32:36.350 killing process with pid 371513 00:32:36.350 
00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 371513 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 371513 00:32:36.350 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:36.350 [2024-12-07 00:59:35.139930] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:32:36.350 [2024-12-07 00:59:35.140053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371513 ] 00:32:36.350 [2024-12-07 00:59:35.209631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.350 [2024-12-07 00:59:35.255757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.350 Running I/O for 15 seconds... 00:32:36.350 8485.00 IOPS, 33.14 MiB/s [2024-12-06T23:59:52.501Z] [2024-12-07 00:59:37.353471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.350 [2024-12-07 00:59:37.353531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 
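The interleaved nvme_qpair.c lines in this dump come in pairs: each 243:nvme_io_qpair_print_command NOTICE shows a submitted command (sqid, cid, nsid, lba, length, SGL type) and the matching 474:spdk_nvme_print_completion NOTICE shows how it completed. Every completion here reads "ABORTED - SQ DELETION (00/08)", i.e. NVMe status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), which is what in-flight I/O is completed with when the queue pair to the first path is torn down during the failover. A small illustrative parser for such a completion line (a hypothetical helper, not part of SPDK):

import re

# One completion line copied from the dump, truncated to the interesting part.
line = ("[2024-12-07 00:59:37.353531] nvme_qpair.c: 474:spdk_nvme_print_completion: "
        "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")

# SPDK prints the NVMe status as "(SCT/SC)" in hex; 00/08 decodes to
# Generic Command Status / Command Aborted due to SQ Deletion.
m = re.search(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)\s+qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)", line)
if m:
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    print(f"qid={m['qid']} cid={m['cid']} SCT=0x{sct:02x} SC=0x{sc:02x}")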
[2024-12-07 00:59:37.353742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.353954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.353967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80136 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.351 [2024-12-07 00:59:37.354777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.351 [2024-12-07 00:59:37.354792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.354957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.354970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 
00:59:37.355033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.355966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.355989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.356028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.352 [2024-12-07 00:59:37.356044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.352 [2024-12-07 00:59:37.356060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.353 [2024-12-07 00:59:37.356075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.353 [2024-12-07 00:59:37.356105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.353 [2024-12-07 00:59:37.356134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.353 [2024-12-07 00:59:37.356165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 
00:59:37.356325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.356963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.356978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79776 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.353 [2024-12-07 00:59:37.357323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.353 [2024-12-07 00:59:37.357353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357606] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.354 [2024-12-07 00:59:37.357700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.354 [2024-12-07 00:59:37.357711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80512 len:8 PRP1 0x0 PRP2 0x0 00:32:36.354 [2024-12-07 00:59:37.357723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357785] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:36.354 [2024-12-07 00:59:37.357839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.354 [2024-12-07 00:59:37.357859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.354 [2024-12-07 00:59:37.357889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.354 [2024-12-07 00:59:37.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.354 [2024-12-07 00:59:37.357944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:37.357962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
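Just above, bdev_nvme records the failover proper: the transport ID switches from 10.0.0.2:4420 to 10.0.0.2:4421, the queued admin ASYNC EVENT REQUESTs are aborted with the same SQ-deletion status, and the controller is marked as being in a failed state; the lines that follow show it being disconnected and then reset successfully against the new path. Purely as an illustration (a hypothetical helper, not part of the SPDK test scripts), a few lines of Python could pull that timeline out of the saved try.txt, assuming one log entry per line as in the file cat'ed by failover.sh:

import re
import sys

# Match a timestamp plus one of the failover-related messages seen in the log above.
EVENT = re.compile(
    r"\[(?P<ts>[0-9-]+ [0-9:.]+)\].*?"
    r"(?P<event>Start failover from \S+ to \S+"
    r"|resetting controller"
    r"|Resetting controller successful"
    r"|in failed state)"
)

def failover_timeline(text):
    """Return (timestamp, event) pairs in the order they appear in the log."""
    return [(m["ts"], m["event"]) for m in EVENT.finditer(text)]

if __name__ == "__main__":
    with open(sys.argv[1]) as f:   # e.g. the try.txt dumped by the test
        for ts, event in failover_timeline(f.read()):
            print(ts, event)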
00:32:36.354 [2024-12-07 00:59:37.361319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:36.354 [2024-12-07 00:59:37.361358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75900 (9): Bad file descriptor 00:32:36.354 [2024-12-07 00:59:37.385078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:36.354 8429.00 IOPS, 32.93 MiB/s [2024-12-06T23:59:52.505Z] 8498.00 IOPS, 33.20 MiB/s [2024-12-06T23:59:52.505Z] 8501.50 IOPS, 33.21 MiB/s [2024-12-06T23:59:52.505Z] [2024-12-07 00:59:41.112658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.112945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.112961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.112991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.354 [2024-12-07 00:59:41.113192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.354 [2024-12-07 00:59:41.113369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.354 [2024-12-07 00:59:41.113384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 
00:59:41.113651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.113958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.113972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.355 [2024-12-07 00:59:41.114057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.355 [2024-12-07 00:59:41.114086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.355 [2024-12-07 00:59:41.114437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.355 [2024-12-07 00:59:41.114451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.114810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 
00:59:41.114913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.114972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.114986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.356 [2024-12-07 00:59:41.115269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.356 [2024-12-07 00:59:41.115642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.356 [2024-12-07 00:59:41.115657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82040 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.357 [2024-12-07 00:59:41.115963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.115978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.115992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 
[2024-12-07 00:59:41.116158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.357 [2024-12-07 00:59:41.116683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.357 [2024-12-07 00:59:41.116730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.357 [2024-12-07 00:59:41.116742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81432 len:8 PRP1 0x0 PRP2 0x0 00:32:36.357 [2024-12-07 00:59:41.116755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116820] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:36.357 [2024-12-07 00:59:41.116858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.357 [2024-12-07 00:59:41.116877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.357 [2024-12-07 00:59:41.116907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.357 [2024-12-07 00:59:41.116934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.357 [2024-12-07 00:59:41.116962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.357 [2024-12-07 00:59:41.116975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:36.358 [2024-12-07 00:59:41.117024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75900 (9): Bad file descriptor 00:32:36.358 [2024-12-07 00:59:41.120291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:36.358 8356.20 IOPS, 32.64 MiB/s [2024-12-06T23:59:52.509Z] [2024-12-07 00:59:41.231896] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:32:36.358 8342.00 IOPS, 32.59 MiB/s [2024-12-06T23:59:52.509Z] 8359.86 IOPS, 32.66 MiB/s [2024-12-06T23:59:52.509Z] 8387.00 IOPS, 32.76 MiB/s [2024-12-06T23:59:52.509Z] 8397.33 IOPS, 32.80 MiB/s [2024-12-06T23:59:52.509Z] [2024-12-07 00:59:45.652415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:36.358 [2024-12-07 00:59:45.652733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.652957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.652971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653066] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.358 [2024-12-07 00:59:45.653597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.358 [2024-12-07 00:59:45.653612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.653966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.653987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.654042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.654073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.654108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.654137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.359 [2024-12-07 00:59:45.654167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 
[2024-12-07 00:59:45.654359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.359 [2024-12-07 00:59:45.654728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.359 [2024-12-07 00:59:45.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.654964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 
00:59:45.655625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.655960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.360 [2024-12-07 00:59:45.655973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.360 [2024-12-07 00:59:45.656000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.361 [2024-12-07 00:59:45.656218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24728 len:8 PRP1 0x0 
PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24744 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24752 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24760 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656604] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.361 [2024-12-07 00:59:45.656628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.361 [2024-12-07 00:59:45.656639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24200 len:8 PRP1 0x0 PRP2 0x0 00:32:36.361 [2024-12-07 00:59:45.656652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656715] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:36.361 [2024-12-07 00:59:45.656767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.361 [2024-12-07 00:59:45.656787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.361 [2024-12-07 00:59:45.656823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.361 [2024-12-07 00:59:45.656852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.361 [2024-12-07 00:59:45.656880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.361 [2024-12-07 00:59:45.656894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:36.361 [2024-12-07 00:59:45.656948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75900 (9): Bad file descriptor 00:32:36.361 [2024-12-07 00:59:45.660269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:36.361 [2024-12-07 00:59:45.770781] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
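The dump above is bdev_nvme printing each outstanding command it aborts with "ABORTED - SQ DELETION" while it fails the path over from 10.0.0.2:4422 to 10.0.0.2:4420 and resets the controller. For reference, a minimal sketch (not the verbatim host/failover.sh flow) of how that situation is set up, using the same RPC calls, socket, addresses and ports that appear elsewhere in this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# expose the subsystem on the extra ports so bdev_nvme has alternate paths
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach the controller through each path; '-x failover' keeps the extra trids for failover
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# removing the active path forces the failover: queued I/O on that qpair is aborted
# (the SQ DELETION notices above) and the controller is reset against the next trid
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0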
00:32:36.361 8319.40 IOPS, 32.50 MiB/s [2024-12-06T23:59:52.512Z] 8339.82 IOPS, 32.58 MiB/s [2024-12-06T23:59:52.512Z] 8357.33 IOPS, 32.65 MiB/s [2024-12-06T23:59:52.512Z] 8372.31 IOPS, 32.70 MiB/s [2024-12-06T23:59:52.512Z] 8381.14 IOPS, 32.74 MiB/s 00:32:36.361 Latency(us) 00:32:36.361 [2024-12-06T23:59:52.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.361 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:36.361 Verification LBA range: start 0x0 length 0x4000 00:32:36.361 NVMe0n1 : 15.01 8376.82 32.72 678.76 0.00 14107.51 540.07 16699.54 00:32:36.361 [2024-12-06T23:59:52.512Z] =================================================================================================================== 00:32:36.361 [2024-12-06T23:59:52.512Z] Total : 8376.82 32.72 678.76 0.00 14107.51 540.07 16699.54 00:32:36.361 Received shutdown signal, test time was about 15.000000 seconds 00:32:36.361 00:32:36.361 Latency(us) 00:32:36.361 [2024-12-06T23:59:52.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.361 [2024-12-06T23:59:52.512Z] =================================================================================================================== 00:32:36.361 [2024-12-06T23:59:52.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=373371 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 373371 /var/tmp/bdevperf.sock 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 373371 ']' 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
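The bdevperf invocation just above follows the pattern used throughout this test: the app is started with -z so it sits idle, the harness waits for its RPC socket, pushes the NVMe-oF configuration over that socket, and only then starts the workload via bdevperf.py. A rough, hedged equivalent using the same paths as this log (the harness's waitforlisten helper does the socket wait more carefully than the simple polling loop below):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# poll the RPC socket until bdevperf answers before sending any configuration
until $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# ... bdev_nvme_attach_controller calls as sketched earlier ...
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests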
00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:36.361 00:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:36.361 [2024-12-07 00:59:52.035233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:36.361 00:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:36.361 [2024-12-07 00:59:52.299957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:36.362 00:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:36.619 NVMe0n1 00:32:36.619 00:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:36.877 00:32:37.134 00:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:37.391 00:32:37.391 00:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:37.391 00:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:37.649 00:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:37.907 00:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:41.192 00:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:41.192 00:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:41.192 00:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=374038 00:32:41.192 00:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:41.192 00:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 374038 00:32:42.571 { 00:32:42.571 "results": [ 00:32:42.571 { 00:32:42.571 "job": "NVMe0n1", 00:32:42.571 "core_mask": "0x1", 00:32:42.571 
"workload": "verify", 00:32:42.571 "status": "finished", 00:32:42.571 "verify_range": { 00:32:42.571 "start": 0, 00:32:42.571 "length": 16384 00:32:42.571 }, 00:32:42.571 "queue_depth": 128, 00:32:42.571 "io_size": 4096, 00:32:42.571 "runtime": 1.010192, 00:32:42.571 "iops": 8499.374376356178, 00:32:42.571 "mibps": 33.20068115764132, 00:32:42.571 "io_failed": 0, 00:32:42.571 "io_timeout": 0, 00:32:42.571 "avg_latency_us": 14965.333256032647, 00:32:42.571 "min_latency_us": 1553.4459259259258, 00:32:42.571 "max_latency_us": 17476.266666666666 00:32:42.571 } 00:32:42.571 ], 00:32:42.571 "core_count": 1 00:32:42.571 } 00:32:42.571 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:42.571 [2024-12-07 00:59:51.514330] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:32:42.571 [2024-12-07 00:59:51.514435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373371 ] 00:32:42.571 [2024-12-07 00:59:51.585081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.571 [2024-12-07 00:59:51.633383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.571 [2024-12-07 00:59:53.918577] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:42.571 [2024-12-07 00:59:53.918669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.571 [2024-12-07 00:59:53.918693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.571 [2024-12-07 00:59:53.918711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.571 [2024-12-07 00:59:53.918725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.571 [2024-12-07 00:59:53.918756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.571 [2024-12-07 00:59:53.918771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.571 [2024-12-07 00:59:53.918787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.571 [2024-12-07 00:59:53.918801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.571 [2024-12-07 00:59:53.918823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:42.571 [2024-12-07 00:59:53.918871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:42.571 [2024-12-07 00:59:53.918904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x678900 (9): Bad file descriptor 00:32:42.571 [2024-12-07 00:59:53.929523] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:42.571 Running I/O for 1 seconds... 00:32:42.571 8410.00 IOPS, 32.85 MiB/s 00:32:42.571 Latency(us) 00:32:42.571 [2024-12-06T23:59:58.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.571 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:42.571 Verification LBA range: start 0x0 length 0x4000 00:32:42.571 NVMe0n1 : 1.01 8499.37 33.20 0.00 0.00 14965.33 1553.45 17476.27 00:32:42.571 [2024-12-06T23:59:58.722Z] =================================================================================================================== 00:32:42.571 [2024-12-06T23:59:58.722Z] Total : 8499.37 33.20 0.00 0.00 14965.33 1553.45 17476.27 00:32:42.571 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:42.571 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:42.571 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:42.830 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:42.830 00:59:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:43.088 00:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:43.656 00:59:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 373371 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 373371 ']' 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 373371 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:46.940 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373371 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373371' 00:32:46.941 killing process with pid 373371 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 373371 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 373371 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:46.941 01:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.197 rmmod nvme_tcp 00:32:47.197 rmmod nvme_fabrics 00:32:47.197 rmmod nvme_keyring 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 371225 ']' 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 371225 00:32:47.197 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 371225 ']' 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 371225 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371225 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371225' 00:32:47.454 killing process with pid 371225 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 371225 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 371225 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
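The teardown recorded here (nvmf_delete_subsystem, modprobe -r of the initiator modules, killing the target process) boils down to the following sketch; $nvmfpid stands in for the target PID the harness tracks (371225 in this run) and is an assumption of this sketch rather than a variable taken from the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# unloading nvme-tcp also pulls out nvme_fabrics/nvme_keyring, matching the rmmod lines above
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
if kill -0 "$nvmfpid" 2>/dev/null; then   # assumed variable holding the nvmf_tgt PID
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null
fi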
00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.454 01:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.996 00:32:49.996 real 0m35.284s 00:32:49.996 user 2m4.019s 00:32:49.996 sys 0m6.092s 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:49.996 ************************************ 00:32:49.996 END TEST nvmf_failover 00:32:49.996 ************************************ 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.996 ************************************ 00:32:49.996 START TEST nvmf_host_discovery 00:32:49.996 ************************************ 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:49.996 * Looking for test storage... 
00:32:49.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:49.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.996 --rc genhtml_branch_coverage=1 00:32:49.996 --rc genhtml_function_coverage=1 00:32:49.996 --rc genhtml_legend=1 00:32:49.996 --rc geninfo_all_blocks=1 00:32:49.996 --rc geninfo_unexecuted_blocks=1 00:32:49.996 00:32:49.996 ' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:49.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.996 --rc genhtml_branch_coverage=1 00:32:49.996 --rc genhtml_function_coverage=1 00:32:49.996 --rc genhtml_legend=1 00:32:49.996 --rc geninfo_all_blocks=1 00:32:49.996 --rc geninfo_unexecuted_blocks=1 00:32:49.996 00:32:49.996 ' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:49.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.996 --rc genhtml_branch_coverage=1 00:32:49.996 --rc genhtml_function_coverage=1 00:32:49.996 --rc genhtml_legend=1 00:32:49.996 --rc geninfo_all_blocks=1 00:32:49.996 --rc geninfo_unexecuted_blocks=1 00:32:49.996 00:32:49.996 ' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:49.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.996 --rc genhtml_branch_coverage=1 00:32:49.996 --rc genhtml_function_coverage=1 00:32:49.996 --rc genhtml_legend=1 00:32:49.996 --rc geninfo_all_blocks=1 00:32:49.996 --rc geninfo_unexecuted_blocks=1 00:32:49.996 00:32:49.996 ' 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:49.996 01:00:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.996 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.997 01:00:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:51.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:51.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.902 01:00:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.902 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:51.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:51.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.903 
01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:51.903 01:00:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:51.903 00:32:51.903 --- 10.0.0.2 ping statistics --- 00:32:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.903 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:32:51.903 00:32:51.903 --- 10.0.0.1 ping statistics --- 00:32:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.903 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.903 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=377239 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 377239 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 377239 ']' 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.162 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.162 [2024-12-07 01:00:08.125076] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:32:52.162 [2024-12-07 01:00:08.125158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.162 [2024-12-07 01:00:08.200017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.162 [2024-12-07 01:00:08.247208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.162 [2024-12-07 01:00:08.247266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.162 [2024-12-07 01:00:08.247293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.162 [2024-12-07 01:00:08.247305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.162 [2024-12-07 01:00:08.247314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.162 [2024-12-07 01:00:08.247937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 [2024-12-07 01:00:08.392227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 [2024-12-07 01:00:08.400454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 null0 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 null1 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=377282 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 377282 /tmp/host.sock 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 377282 ']' 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:52.421 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.421 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.421 [2024-12-07 01:00:08.475181] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:32:52.421 [2024-12-07 01:00:08.475253] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377282 ] 00:32:52.421 [2024-12-07 01:00:08.543235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.680 [2024-12-07 01:00:08.590770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.680 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.681 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.681 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.681 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:52.939 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 [2024-12-07 01:00:08.977966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:52.940 01:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:52.940 01:00:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:53.198 01:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:53.762 [2024-12-07 01:00:09.779692] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:53.762 [2024-12-07 01:00:09.779717] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:53.762 [2024-12-07 01:00:09.779739] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:53.762 
[2024-12-07 01:00:09.908154] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:54.019 [2024-12-07 01:00:10.130421] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:54.019 [2024-12-07 01:00:10.131555] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf5b0e0:1 started. 00:32:54.019 [2024-12-07 01:00:10.133381] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:54.019 [2024-12-07 01:00:10.133417] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:54.019 [2024-12-07 01:00:10.138675] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf5b0e0 was disconnected and freed. delete nvme_qpair. 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:54.019 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.277 01:00:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.277 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:54.278 [2024-12-07 01:00:10.312812] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf29680:1 started. 00:32:54.278 [2024-12-07 01:00:10.318907] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf29680 was disconnected and freed. delete nvme_qpair. 
00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 [2024-12-07 01:00:10.393984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:54.278 [2024-12-07 01:00:10.394489] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:54.278 [2024-12-07 01:00:10.394517] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:54.278 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:54.536 01:00:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:54.536 [2024-12-07 01:00:10.520343] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:54.536 01:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:54.796 [2024-12-07 01:00:10.784795] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:54.796 [2024-12-07 01:00:10.784840] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:54.796 [2024-12-07 01:00:10.784855] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:54.796 [2024-12-07 01:00:10.784862] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:55.736 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.737 [2024-12-07 01:00:11.618164] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:55.737 [2024-12-07 01:00:11.618207] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:55.737 [2024-12-07 01:00:11.624355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.737 [2024-12-07 01:00:11.624406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.737 [2024-12-07 01:00:11.624424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.737 [2024-12-07 01:00:11.624437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.737 [2024-12-07 01:00:11.624467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.737 [2024-12-07 01:00:11.624479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.737 [2024-12-07 01:00:11.624493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.737 [2024-12-07 01:00:11.624505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.737 [2024-12-07 01:00:11.624534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:55.737 [2024-12-07 01:00:11.634345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.737 [2024-12-07 01:00:11.644383] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.737 [2024-12-07 01:00:11.644405] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:55.737 [2024-12-07 01:00:11.644418] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.737 [2024-12-07 01:00:11.644427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.737 [2024-12-07 01:00:11.644475] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:55.737 [2024-12-07 01:00:11.644655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.737 [2024-12-07 01:00:11.644685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.737 [2024-12-07 01:00:11.644702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.737 [2024-12-07 01:00:11.644725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.737 [2024-12-07 01:00:11.644747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.737 [2024-12-07 01:00:11.644762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.737 [2024-12-07 01:00:11.644779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.737 [2024-12-07 01:00:11.644792] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:55.737 [2024-12-07 01:00:11.644801] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
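The is_notification_count_eq check traced a little earlier reads the host app's notification list with notify_get_notifications -i <last_notify_id> and counts the returned entries with jq '. | length'. A minimal sketch of that check, with SPDK's scripts/rpc.py standing in for the harness's rpc_cmd wrapper and the host RPC socket /tmp/host.sock taken from the trace (the variable names below are illustrative, not the test's own):

    #!/usr/bin/env bash
    # Count notifications starting from the last id already accounted for.
    last_notify_id=2     # matches the notify_id=2 value seen in the trace
    expected_count=0
    count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$last_notify_id" | jq '. | length')
    if (( count == expected_count )); then
        echo "notification count matches expected_count=$expected_count"
    fi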
00:32:55.737 [2024-12-07 01:00:11.644809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.737 [2024-12-07 01:00:11.654523] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.737 [2024-12-07 01:00:11.654543] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:55.737 [2024-12-07 01:00:11.654552] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.737 [2024-12-07 01:00:11.654559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.737 [2024-12-07 01:00:11.654582] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:55.737 [2024-12-07 01:00:11.654779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.737 [2024-12-07 01:00:11.654806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.737 [2024-12-07 01:00:11.654822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.737 [2024-12-07 01:00:11.654844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.737 [2024-12-07 01:00:11.654873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.737 [2024-12-07 01:00:11.654888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.737 [2024-12-07 01:00:11.654901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.737 [2024-12-07 01:00:11.654914] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:55.737 [2024-12-07 01:00:11.654922] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:55.737 [2024-12-07 01:00:11.654930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.737 [2024-12-07 01:00:11.664617] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.737 [2024-12-07 01:00:11.664640] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:55.737 [2024-12-07 01:00:11.664649] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.737 [2024-12-07 01:00:11.664656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.737 [2024-12-07 01:00:11.664681] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:55.737 [2024-12-07 01:00:11.664859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.737 [2024-12-07 01:00:11.664888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.737 [2024-12-07 01:00:11.664906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.737 [2024-12-07 01:00:11.664929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.737 [2024-12-07 01:00:11.664950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.737 [2024-12-07 01:00:11.664964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.737 [2024-12-07 01:00:11.664984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.737 [2024-12-07 01:00:11.665006] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:55.737 [2024-12-07 01:00:11.665018] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:55.737 [2024-12-07 01:00:11.665025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.737 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:55.738 [2024-12-07 01:00:11.674790] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.738 [2024-12-07 01:00:11.674813] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:55.738 [2024-12-07 01:00:11.674822] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.738 [2024-12-07 01:00:11.674830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.738 [2024-12-07 01:00:11.674855] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:55.738 [2024-12-07 01:00:11.675030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.738 [2024-12-07 01:00:11.675066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.738 [2024-12-07 01:00:11.675083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.738 [2024-12-07 01:00:11.675107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.738 [2024-12-07 01:00:11.675128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.738 [2024-12-07 01:00:11.675142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.738 [2024-12-07 01:00:11.675156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.738 [2024-12-07 01:00:11.675170] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:55.738 [2024-12-07 01:00:11.675179] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:55.738 [2024-12-07 01:00:11.675187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.738 [2024-12-07 01:00:11.684889] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.738 [2024-12-07 01:00:11.684912] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:55.738 [2024-12-07 01:00:11.684921] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.738 [2024-12-07 01:00:11.684929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.738 [2024-12-07 01:00:11.684956] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:55.738 [2024-12-07 01:00:11.685128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.738 [2024-12-07 01:00:11.685157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.738 [2024-12-07 01:00:11.685174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.738 [2024-12-07 01:00:11.685197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.738 [2024-12-07 01:00:11.685217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.738 [2024-12-07 01:00:11.685232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.738 [2024-12-07 01:00:11.685255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.738 [2024-12-07 01:00:11.685269] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:55.738 [2024-12-07 01:00:11.685278] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:55.738 [2024-12-07 01:00:11.685285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.738 [2024-12-07 01:00:11.695006] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:55.738 [2024-12-07 01:00:11.695028] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:55.738 [2024-12-07 01:00:11.695038] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:55.738 [2024-12-07 01:00:11.695045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:55.738 [2024-12-07 01:00:11.695071] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:55.738 [2024-12-07 01:00:11.695182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.738 [2024-12-07 01:00:11.695225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2d220 with addr=10.0.0.2, port=4420 00:32:55.738 [2024-12-07 01:00:11.695242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d220 is same with the state(6) to be set 00:32:55.738 [2024-12-07 01:00:11.695264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d220 (9): Bad file descriptor 00:32:55.738 [2024-12-07 01:00:11.695285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:55.738 [2024-12-07 01:00:11.695300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:55.738 [2024-12-07 01:00:11.695314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:55.738 [2024-12-07 01:00:11.695326] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
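The burst of "connect() failed, errno = 111" messages above is the host-side bdev_nvme layer retrying the 10.0.0.2:4420 path after the test removed that listener with nvmf_subsystem_remove_listener; the retries stop once the discovery poller reports the 4420 path gone and only 4421 remaining (the "not found" / "found again" lines that follow). The waitforcondition helper seen in the trace bounds such waits by evaluating a condition up to max=10 times with a one-second sleep between attempts. A rough equivalent of that wait, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and the addresses, NQN and socket path copied from the trace:

    #!/usr/bin/env bash
    # Drop the 4420 listener on the target (default RPC socket), then poll the host
    # app until bdev_nvme reports only the 4421 path for controller nvme0.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    max=10
    while (( max-- )); do
        paths=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ "$paths" == "4421" ]] && break
        sleep 1
    done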
00:32:55.738 [2024-12-07 01:00:11.695335] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:55.738 [2024-12-07 01:00:11.695343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.738 [2024-12-07 01:00:11.704879] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:55.738 [2024-12-07 01:00:11.704906] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:55.738 01:00:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:55.738 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:55.739 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.000 01:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.937 [2024-12-07 01:00:12.998636] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:56.937 [2024-12-07 01:00:12.998662] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:56.937 [2024-12-07 01:00:12.998684] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:56.937 [2024-12-07 01:00:13.084937] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:57.507 [2024-12-07 01:00:13.392501] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:57.507 [2024-12-07 01:00:13.393452] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xf42890:1 started. 00:32:57.507 [2024-12-07 01:00:13.395519] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:57.507 [2024-12-07 01:00:13.395560] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:57.507 [2024-12-07 01:00:13.397586] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xf42890 was disconnected and freed. delete nvme_qpair. 
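The NOT rpc_cmd bdev_nvme_start_discovery call being traced here checks that repeating bdev_nvme_start_discovery for a discovery service that is already attached fails: the target answers with JSON-RPC error -17, "File exists", and the NOT wrapper treats that failure as the expected outcome. A sketch of the same check, with scripts/rpc.py standing in for the harness wrappers and every flag copied from the trace:

    #!/usr/bin/env bash
    # First start succeeds and waits for the attach to complete (-w).
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
           -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # A second start against the same discovery service must fail with -17 "File exists".
    if rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
           -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate bdev_nvme_start_discovery unexpectedly succeeded" >&2
        exit 1
    fi

The later nvme_second attempt against port 8010 with -T 3000 exercises the timeout path of the same RPC and is expected to fail with -110, "Connection timed out", instead.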
00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.507 request: 00:32:57.507 { 00:32:57.507 "name": "nvme", 00:32:57.507 "trtype": "tcp", 00:32:57.507 "traddr": "10.0.0.2", 00:32:57.507 "adrfam": "ipv4", 00:32:57.507 "trsvcid": "8009", 00:32:57.507 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:57.507 "wait_for_attach": true, 00:32:57.507 "method": "bdev_nvme_start_discovery", 00:32:57.507 "req_id": 1 00:32:57.507 } 00:32:57.507 Got JSON-RPC error response 00:32:57.507 response: 00:32:57.507 { 00:32:57.507 "code": -17, 00:32:57.507 "message": "File exists" 00:32:57.507 } 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:57.507 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.508 request: 00:32:57.508 { 00:32:57.508 "name": "nvme_second", 00:32:57.508 "trtype": "tcp", 00:32:57.508 "traddr": "10.0.0.2", 00:32:57.508 "adrfam": "ipv4", 00:32:57.508 "trsvcid": "8009", 00:32:57.508 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:57.508 "wait_for_attach": true, 00:32:57.508 "method": "bdev_nvme_start_discovery", 00:32:57.508 "req_id": 1 00:32:57.508 } 00:32:57.508 Got JSON-RPC error response 00:32:57.508 response: 00:32:57.508 { 00:32:57.508 "code": -17, 00:32:57.508 "message": "File exists" 00:32:57.508 } 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.508 01:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.445 [2024-12-07 01:00:14.594895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.446 [2024-12-07 01:00:14.594952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2ae20 with addr=10.0.0.2, port=8010 00:32:58.446 [2024-12-07 01:00:14.594985] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:58.446 [2024-12-07 01:00:14.595012] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:58.446 [2024-12-07 01:00:14.595027] 
bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:59.820 [2024-12-07 01:00:15.597288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.820 [2024-12-07 01:00:15.597323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2ae20 with addr=10.0.0.2, port=8010 00:32:59.820 [2024-12-07 01:00:15.597351] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:59.820 [2024-12-07 01:00:15.597365] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:59.820 [2024-12-07 01:00:15.597377] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:00.757 [2024-12-07 01:00:16.599568] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:00.757 request: 00:33:00.757 { 00:33:00.757 "name": "nvme_second", 00:33:00.757 "trtype": "tcp", 00:33:00.757 "traddr": "10.0.0.2", 00:33:00.757 "adrfam": "ipv4", 00:33:00.757 "trsvcid": "8010", 00:33:00.757 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:00.757 "wait_for_attach": false, 00:33:00.757 "attach_timeout_ms": 3000, 00:33:00.757 "method": "bdev_nvme_start_discovery", 00:33:00.757 "req_id": 1 00:33:00.757 } 00:33:00.757 Got JSON-RPC error response 00:33:00.757 response: 00:33:00.757 { 00:33:00.757 "code": -110, 00:33:00.757 "message": "Connection timed out" 00:33:00.757 } 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 377282 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:33:00.757 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:00.758 rmmod nvme_tcp 00:33:00.758 rmmod nvme_fabrics 00:33:00.758 rmmod nvme_keyring 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 377239 ']' 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 377239 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 377239 ']' 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 377239 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 377239 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 377239' 00:33:00.758 killing process with pid 377239 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 377239 00:33:00.758 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 377239 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.019 01:00:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.019 01:00:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.925 01:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.925 00:33:02.925 real 0m13.316s 00:33:02.925 user 0m19.154s 00:33:02.925 sys 0m2.800s 00:33:02.925 01:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.925 01:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.925 ************************************ 00:33:02.925 END TEST nvmf_host_discovery 00:33:02.925 ************************************ 00:33:02.925 01:00:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:02.925 01:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:02.925 01:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.925 01:00:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.925 ************************************ 00:33:02.925 START TEST nvmf_host_multipath_status 00:33:02.925 ************************************ 00:33:02.926 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:03.183 * Looking for test storage... 00:33:03.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.183 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:03.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.183 --rc genhtml_branch_coverage=1 00:33:03.183 --rc genhtml_function_coverage=1 00:33:03.183 --rc genhtml_legend=1 00:33:03.183 --rc geninfo_all_blocks=1 00:33:03.183 --rc geninfo_unexecuted_blocks=1 00:33:03.183 00:33:03.183 ' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.184 --rc genhtml_branch_coverage=1 00:33:03.184 --rc genhtml_function_coverage=1 00:33:03.184 --rc genhtml_legend=1 00:33:03.184 --rc geninfo_all_blocks=1 00:33:03.184 --rc geninfo_unexecuted_blocks=1 00:33:03.184 00:33:03.184 ' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.184 --rc genhtml_branch_coverage=1 00:33:03.184 --rc genhtml_function_coverage=1 00:33:03.184 --rc genhtml_legend=1 00:33:03.184 --rc geninfo_all_blocks=1 00:33:03.184 --rc geninfo_unexecuted_blocks=1 00:33:03.184 00:33:03.184 ' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:03.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.184 --rc genhtml_branch_coverage=1 00:33:03.184 --rc genhtml_function_coverage=1 00:33:03.184 --rc genhtml_legend=1 00:33:03.184 --rc 
geninfo_all_blocks=1 00:33:03.184 --rc geninfo_unexecuted_blocks=1 00:33:03.184 00:33:03.184 ' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:33:03.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:03.184 01:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.719 01:00:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.719 
01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:05.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:05.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.719 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:05.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:05.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:33:05.720 00:33:05.720 --- 10.0.0.2 ping statistics --- 00:33:05.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.720 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:05.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:33:05.720 00:33:05.720 --- 10.0.0.1 ping statistics --- 00:33:05.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.720 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=380564 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 380564 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 380564 ']' 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.720 [2024-12-07 01:00:21.613732] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:33:05.720 [2024-12-07 01:00:21.613819] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.720 [2024-12-07 01:00:21.688488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:05.720 [2024-12-07 01:00:21.737492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.720 [2024-12-07 01:00:21.737564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.720 [2024-12-07 01:00:21.737578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.720 [2024-12-07 01:00:21.737590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.720 [2024-12-07 01:00:21.737599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
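In outline, the multipath-status flow this run exercises reduces to the rpc.py sequence below. This is only a condensed sketch for orientation, not captured output: it reuses the same subsystem NQN, 10.0.0.2 address, ports 4420/4421, and jq filter that appear in the surrounding trace, and abbreviates the full scripts/rpc.py path to plain rpc.py.

  # target side: one Malloc namespace exported through two TCP listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # initiator side: bdevperf attaches both listeners as one multipath device (-x multipath),
  # then the test flips the ANA state of a listener and reads back per-path status
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

Each check_status step in the trace repeats that last query for the current/connected/accessible fields of both ports after setting the listeners to optimized, non_optimized, or inaccessible.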
00:33:05.720 [2024-12-07 01:00:21.743016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.720 [2024-12-07 01:00:21.743026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.720 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.980 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.980 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=380564 00:33:05.980 01:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:06.239 [2024-12-07 01:00:22.134890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.239 01:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:06.497 Malloc0 00:33:06.497 01:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:06.755 01:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:07.014 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.273 [2024-12-07 01:00:23.349169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.273 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:07.531 [2024-12-07 01:00:23.609859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=380728 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 380728 
/var/tmp/bdevperf.sock 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 380728 ']' 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:07.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.531 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:07.790 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.790 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:07.790 01:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:08.048 01:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:08.618 Nvme0n1 00:33:08.618 01:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:09.186 Nvme0n1 00:33:09.186 01:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:09.186 01:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:11.089 01:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:11.089 01:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:11.347 01:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:11.605 01:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:12.983 01:00:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.983 01:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.241 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.241 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.241 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.241 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.499 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.499 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.499 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.499 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.757 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.757 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:13.757 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.757 01:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.015 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.015 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.015 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.015 01:00:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.273 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.273 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:14.273 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:14.531 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:14.789 01:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:16.173 01:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:16.173 01:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:16.173 01:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.173 01:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.173 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.173 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.173 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.173 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.431 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.431 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.431 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.431 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:16.689 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.689 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:16.690 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.690 01:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.949 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.949 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.949 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.949 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.207 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.207 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.207 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.207 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:17.465 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.465 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:17.465 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:17.723 01:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:18.290 01:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:19.223 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:19.223 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.223 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.223 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.480 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.480 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:19.480 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.480 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.737 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.737 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.737 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.737 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.993 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.993 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.994 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.994 01:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.250 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.250 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.250 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.250 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:20.507 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.507 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:20.507 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.507 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.764 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.764 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:20.764 01:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:33:21.022 01:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:21.279 01:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.650 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:22.907 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.907 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:22.907 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.907 01:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.163 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.163 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.163 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.163 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.419 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.419 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:23.420 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:23.420 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.677 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.677 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:23.677 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.677 01:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:23.935 01:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.935 01:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:23.935 01:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:24.193 01:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:24.452 01:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.827 01:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.085 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.085 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.085 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.085 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.343 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.343 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.343 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.343 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.601 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.601 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:26.601 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.601 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.860 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.860 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:26.860 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.860 01:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.119 01:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.119 01:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:27.119 01:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:27.378 01:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:27.637 01:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:29.014 01:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:29.014 01:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:29.014 01:00:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.014 01:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.014 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.014 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:29.014 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.014 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:29.271 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.271 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:29.272 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.272 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.529 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.529 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.529 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.529 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:29.787 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.787 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:29.787 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.787 01:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.045 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:30.045 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.045 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.045 
01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.304 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.304 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:30.562 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:30.562 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:30.822 01:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:31.391 01:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:32.332 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:32.332 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:32.332 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.332 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.590 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.590 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.590 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.590 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.849 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.849 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.849 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.849 01:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.107 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.107 01:00:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.107 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.107 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.365 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.365 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.365 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.365 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.624 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.624 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.624 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.624 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.882 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.882 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:33.882 01:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.141 01:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:34.399 01:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:35.336 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:35.336 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:35.336 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.336 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.595 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.595 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:35.595 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.595 01:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.165 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.423 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.423 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.423 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.423 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.993 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.993 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.993 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.993 01:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.993 01:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.993 01:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:36.993 
01:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.251 01:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:37.512 01:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.892 01:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.155 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.155 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.155 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.155 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.415 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.415 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.415 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.415 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.672 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.672 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.672 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.672 01:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.930 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.930 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:39.930 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.930 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:40.188 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.188 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:40.188 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:40.758 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:40.758 01:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:42.134 01:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:42.134 01:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:42.134 01:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.134 01:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.134 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.134 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:42.134 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.134 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:42.390 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:42.390 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:42.390 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.390 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:42.647 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.647 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:42.647 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.647 01:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:42.904 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.905 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:42.905 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.905 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:43.162 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.162 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:43.162 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.162 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:43.419 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.419 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 380728 00:33:43.419 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 380728 ']' 00:33:43.419 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 380728 00:33:43.419 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380728 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380728' 00:33:43.690 killing process with pid 380728 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 380728 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 380728 00:33:43.690 { 00:33:43.690 "results": [ 00:33:43.690 { 00:33:43.690 "job": "Nvme0n1", 00:33:43.690 "core_mask": "0x4", 00:33:43.690 "workload": "verify", 00:33:43.690 "status": "terminated", 00:33:43.690 "verify_range": { 00:33:43.690 "start": 0, 00:33:43.690 "length": 16384 00:33:43.690 }, 00:33:43.690 "queue_depth": 128, 00:33:43.690 "io_size": 4096, 00:33:43.690 "runtime": 34.301798, 00:33:43.690 "iops": 8002.2627385304995, 00:33:43.690 "mibps": 31.258838822384764, 00:33:43.690 "io_failed": 0, 00:33:43.690 "io_timeout": 0, 00:33:43.690 "avg_latency_us": 15966.481313931565, 00:33:43.690 "min_latency_us": 385.3274074074074, 00:33:43.690 "max_latency_us": 4101097.2444444443 00:33:43.690 } 00:33:43.690 ], 00:33:43.690 "core_count": 1 00:33:43.690 } 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 380728 00:33:43.690 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:43.690 [2024-12-07 01:00:23.673454] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:33:43.690 [2024-12-07 01:00:23.673567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380728 ] 00:33:43.690 [2024-12-07 01:00:23.747955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.690 [2024-12-07 01:00:23.797489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.690 Running I/O for 90 seconds... 
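For readers following the trace above: the repeated checks reduce to two small shell helpers, one that flips the ANA state of both listeners through rpc.py and one that queries bdev_nvme_get_io_paths over the bdevperf RPC socket and filters a single field with jq. The sketch below is reconstructed only from the commands visible in this trace (paths, RPC names, NQN, address and ports are taken verbatim from it); the function bodies are illustrative, not the verbatim multipath_status.sh source.

#!/usr/bin/env bash
# Sketch of the helpers exercised in the trace above (reconstruction, not the script itself).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # e.g. set_ANA_state non_optimized inaccessible
    # First argument drives the 4420 listener, second the 4421 listener.
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # e.g. port_status 4421 accessible false
    # Pull the requested field (current/connected/accessible) for the path on
    # the given listener port and compare it with the expected value.
    [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
         jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

Judging from the trace, each check_status call is simply six port_status calls in a row (current, connected and accessible for ports 4420 and 4421), which matches the six boolean arguments shown after every set_ANA_state + sleep 1 step above.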
00:33:43.690 8314.00 IOPS, 32.48 MiB/s [2024-12-07T00:00:59.841Z] 8381.50 IOPS, 32.74 MiB/s [2024-12-07T00:00:59.841Z] 8391.67 IOPS, 32.78 MiB/s [2024-12-07T00:00:59.841Z] 8418.75 IOPS, 32.89 MiB/s [2024-12-07T00:00:59.841Z] 8440.80 IOPS, 32.97 MiB/s [2024-12-07T00:00:59.841Z] 8425.67 IOPS, 32.91 MiB/s [2024-12-07T00:00:59.841Z] 8453.00 IOPS, 33.02 MiB/s [2024-12-07T00:00:59.841Z] 8458.50 IOPS, 33.04 MiB/s [2024-12-07T00:00:59.841Z] 8464.33 IOPS, 33.06 MiB/s [2024-12-07T00:00:59.841Z] 8466.60 IOPS, 33.07 MiB/s [2024-12-07T00:00:59.841Z] 8471.00 IOPS, 33.09 MiB/s [2024-12-07T00:00:59.841Z] 8461.25 IOPS, 33.05 MiB/s [2024-12-07T00:00:59.841Z] 8477.85 IOPS, 33.12 MiB/s [2024-12-07T00:00:59.841Z] 8473.29 IOPS, 33.10 MiB/s [2024-12-07T00:00:59.841Z] [2024-12-07 01:00:40.268715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.690 [2024-12-07 01:00:40.268776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.268837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.268857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.268898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.268917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.268940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.268959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.268981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.269007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.269033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.269052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.269076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.269093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.269116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.269133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.270290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.270316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.270342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.690 [2024-12-07 01:00:40.270371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.690 [2024-12-07 01:00:40.270395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.270436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.270476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.270515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.270555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.270596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.270613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.691 [2024-12-07 01:00:40.271932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.271970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.271988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.691 [2024-12-07 01:00:40.272506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.691 [2024-12-07 01:00:40.272529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.272966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.272982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.692 [2024-12-07 01:00:40.273152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:33:43.692 [2024-12-07 01:00:40.273174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.273437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.273977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.692 [2024-12-07 01:00:40.274328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.692 [2024-12-07 01:00:40.274344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 
[2024-12-07 01:00:40.274904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.274966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.274984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99128 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.693 [2024-12-07 01:00:40.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.693 [2024-12-07 01:00:40.275512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.275964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.275981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:33:43.694 [2024-12-07 01:00:40.276136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.276618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.276640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.694 [2024-12-07 01:00:40.276657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.694 [2024-12-07 01:00:40.277642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.694 [2024-12-07 01:00:40.277660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.277963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.277981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.695 [2024-12-07 01:00:40.278116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.695 [2024-12-07 01:00:40.278595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.695 [2024-12-07 01:00:40.278612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.278951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.278968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:33:43.696 [2024-12-07 01:00:40.279364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.696 [2024-12-07 01:00:40.279755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.696 [2024-12-07 01:00:40.279871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:43.696 [2024-12-07 01:00:40.279893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.279910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.279932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.279948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.279991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.280019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.282953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.282978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:43.697 [2024-12-07 01:00:40.283508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.697 [2024-12-07 01:00:40.283525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.283958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.283975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.284425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.284455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.698 
[2024-12-07 01:00:40.294379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:43.698 [2024-12-07 01:00:40.294685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.698 [2024-12-07 01:00:40.294701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.294723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.294739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.294761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.294778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.294800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.294823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.294861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.294878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:43.699 8465.53 IOPS, 33.07 MiB/s [2024-12-07T00:00:59.850Z] [2024-12-07 01:00:40.295754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.699 [2024-12-07 01:00:40.295780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.295808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.295828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.295851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.295868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.295891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.295908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.295931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.295949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.295971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.295989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.699 [2024-12-07 01:00:40.296788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:43.699 [2024-12-07 01:00:40.296811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.296829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.296851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.296868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.296907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.296924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.296946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.296963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:33:43.700 [2024-12-07 01:00:40.297769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.297941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.297963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.298033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.298052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.298075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.298092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.298116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.298133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.700 [2024-12-07 01:00:40.298156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.700 [2024-12-07 01:00:40.298173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.701 [2024-12-07 01:00:40.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.298254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.298309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.298368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.298408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.298429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.298446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.701 [2024-12-07 01:00:40.299781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:43.701 [2024-12-07 01:00:40.299924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.701 [2024-12-07 01:00:40.299942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.299965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.299986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99048 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.300958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.300975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:33:43.702 [2024-12-07 01:00:40.301119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.702 [2024-12-07 01:00:40.301341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:43.702 [2024-12-07 01:00:40.301363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.703 [2024-12-07 01:00:40.301873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.703 [2024-12-07 01:00:40.301889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:43.703 [2024-12-07 01:00:40.301926 - 01:00:40.313448] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion notice pairs for every outstanding I/O on qid:1 (READ and WRITE, sqid:1 nsid:1, lba:98360-99376, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 or SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:33:43.710 [2024-12-07 01:00:40.313470] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:33:43.710 [2024-12-07 01:00:40.313895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.313966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.313989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:43.710 [2024-12-07 01:00:40.314338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.710 [2024-12-07 01:00:40.314355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.314952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.314978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.711 [2024-12-07 01:00:40.315158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99360 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.711 [2024-12-07 01:00:40.315634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:43.711 [2024-12-07 01:00:40.315656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.711 [2024-12-07 01:00:40.315673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.315696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.315719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.315742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.315759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.315781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.315798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.315822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.315839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.316967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.316984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
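(Editor's note, illustrative only: the repeated *NOTICE* pairs in this stretch are nvme_qpair.c printing each I/O command alongside the completion it received, and every completion here carries the ASYMMETRIC ACCESS INACCESSIBLE (03/02) status on qid:1. When reading a saved copy of this console output, a burst like this can be tallied with standard grep/awk; "nvmf.log" below is a hypothetical file name for the saved log, not something the test itself produces.)

  # Tally completions by status string, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)"
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' nvmf.log \
    | sed 's/.*\*NOTICE\*: //' | sort | uniq -c | sort -rn

  # Tally the re-printed commands by opcode (READ vs. WRITE)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' nvmf.log \
    | awk '{print $NF}' | sort | uniq -c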
00:33:43.712 [2024-12-07 01:00:40.317235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:43.712 [2024-12-07 01:00:40.317556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.712 [2024-12-07 01:00:40.317573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.317971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.317993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.713 [2024-12-07 01:00:40.318467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.713 [2024-12-07 01:00:40.318815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.713 [2024-12-07 01:00:40.318836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.318860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98368 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.714 [2024-12-07 01:00:40.318877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.318900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.318917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.318939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.318956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.319963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.319986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:43.714 
[2024-12-07 01:00:40.320275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:43.714 [2024-12-07 01:00:40.320719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.714 [2024-12-07 01:00:40.320736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.320959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.320976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.321370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.321387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.327768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.327800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.327827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.327851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.327875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 
01:00:40.327893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.327917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.327934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.327957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.327974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.715 [2024-12-07 01:00:40.328368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.715 [2024-12-07 01:00:40.328393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.716 [2024-12-07 01:00:40.328847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.328956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.328973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:43.716 
[2024-12-07 01:00:40.329917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.329961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.329978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.330016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.716 [2024-12-07 01:00:40.330036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.716 [2024-12-07 01:00:40.330063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330817] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.330955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.330982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 
01:00:40.331311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.717 [2024-12-07 01:00:40.331503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.717 [2024-12-07 01:00:40.331531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.718 [2024-12-07 01:00:40.331779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.331824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:40.331974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:40.332003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.718 7936.44 IOPS, 31.00 MiB/s [2024-12-07T00:00:59.869Z] 7469.59 IOPS, 29.18 MiB/s [2024-12-07T00:00:59.869Z] 7054.61 IOPS, 27.56 MiB/s [2024-12-07T00:00:59.869Z] 6683.32 IOPS, 26.11 MiB/s [2024-12-07T00:00:59.869Z] 6753.80 IOPS, 26.38 MiB/s [2024-12-07T00:00:59.869Z] 6830.71 IOPS, 26.68 MiB/s [2024-12-07T00:00:59.869Z] 6950.55 IOPS, 27.15 MiB/s [2024-12-07T00:00:59.869Z] 7141.83 IOPS, 27.90 MiB/s [2024-12-07T00:00:59.869Z] 7309.58 IOPS, 28.55 MiB/s [2024-12-07T00:00:59.869Z] 7462.68 IOPS, 29.15 MiB/s [2024-12-07T00:00:59.869Z] 7498.96 IOPS, 29.29 MiB/s [2024-12-07T00:00:59.869Z] 7539.22 IOPS, 29.45 MiB/s [2024-12-07T00:00:59.869Z] 7575.50 IOPS, 29.59 MiB/s [2024-12-07T00:00:59.869Z] 7669.79 IOPS, 29.96 MiB/s [2024-12-07T00:00:59.869Z] 7798.17 IOPS, 30.46 MiB/s [2024-12-07T00:00:59.869Z] 7909.71 IOPS, 30.90 MiB/s [2024-12-07T00:00:59.869Z] [2024-12-07 01:00:56.876665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.718 [2024-12-07 01:00:56.876737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.876826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.718 [2024-12-07 01:00:56.876858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.876885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.876903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.876926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.876943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.876979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.718 [2024-12-07 01:00:56.877357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.718 [2024-12-07 01:00:56.877401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.877426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.718 [2024-12-07 01:00:56.877443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:43.718 [2024-12-07 01:00:56.880302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.719 [2024-12-07 01:00:56.880685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.880741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 
01:00:56.880796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.880836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.880881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.880933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.880956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.880974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.881025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.881044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.881066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.881107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.881124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.881147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:43.719 [2024-12-07 01:00:56.881187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.719 [2024-12-07 01:00:56.881205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:43.719 7970.19 IOPS, 31.13 MiB/s [2024-12-07T00:00:59.870Z] 7985.79 IOPS, 31.19 MiB/s [2024-12-07T00:00:59.870Z] 8001.32 IOPS, 31.26 MiB/s 
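Each NOTICE pair above is a queued I/O and its completion, and every completion in this window carries the path-related status ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what the host is expected to see while the multipath_status test holds the active path's ANA group inaccessible; the interleaved fio counters dip from roughly 7936 to 6683 IOPS and climb back to about 8001 IOPS as the path recovers. As a quick sanity check (not part of the test output), the MiB/s column is simply IOPS scaled by the job's 4096-byte I/O size:

    # hedged check, assuming the 4 KiB I/O size and the 8002.26 IOPS reported in the summary below
    awk 'BEGIN { printf "%.2f MiB/s\n", 8002.26 * 4096 / (1024 * 1024) }'

which prints 31.26 MiB/s and matches the Nvme0n1 line in the latency summary that follows.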
[2024-12-07T00:00:59.870Z] Received shutdown signal, test time was about 34.302608 seconds 00:33:43.719 00:33:43.719 Latency(us) 00:33:43.719 [2024-12-07T00:00:59.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.719 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:43.719 Verification LBA range: start 0x0 length 0x4000 00:33:43.719 Nvme0n1 : 34.30 8002.26 31.26 0.00 0.00 15966.48 385.33 4101097.24 00:33:43.719 [2024-12-07T00:00:59.870Z] =================================================================================================================== 00:33:43.719 [2024-12-07T00:00:59.870Z] Total : 8002.26 31.26 0.00 0.00 15966.48 385.33 4101097.24 00:33:43.719 01:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.978 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.978 rmmod nvme_tcp 00:33:43.978 rmmod nvme_fabrics 00:33:43.978 rmmod nvme_keyring 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 380564 ']' 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 380564 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 380564 ']' 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 380564 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380564 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380564' 00:33:44.237 killing process with pid 380564 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 380564 00:33:44.237 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 380564 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.497 01:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.400 01:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.400 00:33:46.400 real 0m43.408s 00:33:46.400 user 2m12.238s 00:33:46.400 sys 0m10.505s 00:33:46.400 01:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.400 01:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:46.400 ************************************ 00:33:46.400 END TEST nvmf_host_multipath_status 00:33:46.400 ************************************ 00:33:46.400 01:01:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:46.400 01:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:46.401 01:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.401 01:01:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.401 ************************************ 00:33:46.401 START TEST nvmf_discovery_remove_ifc 00:33:46.401 ************************************ 00:33:46.401 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:46.661 * Looking for test storage... 
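The verbose nvmftestfini trace above reduces to a short teardown sequence. The sketch below is a hedged recap that uses only commands and names visible in this log (subsystem cnode1, pid 380564, interface cvl_0_1), not the full helper, and it leaves out remove_spdk_ns because its body is not shown in this excerpt:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    rm -f test/nvmf/host/try.txt                                      # remove the test's temporary output file
    kill 380564                                                       # killprocess: stop the nvmf target (reactor_0), then wait for it
    modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics              # unload the host-side NVMe over Fabrics modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore              # iptr: strip the SPDK test firewall rules
    ip -4 addr flush cvl_0_1                                          # clear the address on the test interface

After this cleanup the timing summary is printed and the harness moves on to the next suite, nvmf_discovery_remove_ifc, whose test-storage probe continues below.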
00:33:46.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.661 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:46.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.662 --rc genhtml_branch_coverage=1 00:33:46.662 --rc genhtml_function_coverage=1 00:33:46.662 --rc genhtml_legend=1 00:33:46.662 --rc geninfo_all_blocks=1 00:33:46.662 --rc geninfo_unexecuted_blocks=1 00:33:46.662 00:33:46.662 ' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:46.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.662 --rc genhtml_branch_coverage=1 00:33:46.662 --rc genhtml_function_coverage=1 00:33:46.662 --rc genhtml_legend=1 00:33:46.662 --rc geninfo_all_blocks=1 00:33:46.662 --rc geninfo_unexecuted_blocks=1 00:33:46.662 00:33:46.662 ' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:46.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.662 --rc genhtml_branch_coverage=1 00:33:46.662 --rc genhtml_function_coverage=1 00:33:46.662 --rc genhtml_legend=1 00:33:46.662 --rc geninfo_all_blocks=1 00:33:46.662 --rc geninfo_unexecuted_blocks=1 00:33:46.662 00:33:46.662 ' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:46.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.662 --rc genhtml_branch_coverage=1 00:33:46.662 --rc genhtml_function_coverage=1 00:33:46.662 --rc genhtml_legend=1 00:33:46.662 --rc geninfo_all_blocks=1 00:33:46.662 --rc geninfo_unexecuted_blocks=1 00:33:46.662 00:33:46.662 ' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.662 
01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:46.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.662 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.663 01:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:49.212 01:01:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:49.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.212 01:01:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:49.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:49.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:49.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.212 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.213 01:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.213 
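
Condensed, the nvmf_tcp_init sequence above amounts to the following by-hand setup (interface names and addresses are the ones detected in this run; the two ice ports will differ on other machines):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the ipts helper also tags the rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                   # initiator -> target reachability check
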
01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:33:49.213 00:33:49.213 --- 10.0.0.2 ping statistics --- 00:33:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.213 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:33:49.213 00:33:49.213 --- 10.0.0.1 ping statistics --- 00:33:49.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.213 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=387181 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 387181 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 387181 ']' 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:49.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.213 [2024-12-07 01:01:05.100708] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:33:49.213 [2024-12-07 01:01:05.100795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.213 [2024-12-07 01:01:05.172408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.213 [2024-12-07 01:01:05.214781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.213 [2024-12-07 01:01:05.214853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.213 [2024-12-07 01:01:05.214877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.213 [2024-12-07 01:01:05.214887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.213 [2024-12-07 01:01:05.214896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.213 [2024-12-07 01:01:05.215533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.213 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.474 [2024-12-07 01:01:05.359684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.474 [2024-12-07 01:01:05.367896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:49.474 null0 00:33:49.474 [2024-12-07 01:01:05.399825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.474 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=387204 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 387204 /tmp/host.sock 00:33:49.475 01:01:05 
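
The rpc_cmd block at discovery_remove_ifc.sh@43 is collapsed in the trace; a plausible reconstruction of the target-side configuration behind the NOTICE lines above (TCP transport, discovery listener on 8009, a null-bdev namespace on 4420), issued with scripts/rpc.py against the target's default socket, might look like the sketch below. The null-bdev geometry and the allow-any-host flag are assumptions, not values read from this log.

./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512                        # size/block size assumed
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a (allow any host) assumed
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
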
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 387204 ']' 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:49.475 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.475 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.475 [2024-12-07 01:01:05.474665] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:33:49.475 [2024-12-07 01:01:05.474752] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387204 ] 00:33:49.475 [2024-12-07 01:01:05.547921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.475 [2024-12-07 01:01:05.595113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:49.732 01:01:05 
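
Restated as direct scripts/rpc.py calls, the host-side bring-up above (all arguments taken verbatim from the trace; paths relative to the SPDK repo root) is roughly:

HOST_SOCK=/tmp/host.sock
./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
# wait for the RPC socket to appear (the test uses waitforlisten), then:
./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_set_options -e 1
./scripts/rpc.py -s "$HOST_SOCK" framework_start_init          # required because of --wait-for-rpc
./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach
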
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.732 01:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.113 [2024-12-07 01:01:06.878177] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:51.113 [2024-12-07 01:01:06.878215] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:51.113 [2024-12-07 01:01:06.878238] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.113 [2024-12-07 01:01:06.964523] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:51.113 [2024-12-07 01:01:07.106511] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:51.113 [2024-12-07 01:01:07.107625] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18fac90:1 started. 00:33:51.114 [2024-12-07 01:01:07.109370] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:51.114 [2024-12-07 01:01:07.109437] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:51.114 [2024-12-07 01:01:07.109473] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:51.114 [2024-12-07 01:01:07.109499] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:51.114 [2024-12-07 01:01:07.109533] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.114 [2024-12-07 01:01:07.115856] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18fac90 was disconnected and freed. delete nvme_qpair. 
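
The get_bdev_list/wait_for_bdev polling that follows reduces to this loop (helper names mirror the test script's; sketch only):

get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {                     # poll until the bdev list equals the expected string
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}
wait_for_bdev nvme0n1                 # discovery attached ctrlr nvme0, which exposes bdev nvme0n1
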
00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:51.114 01:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:52.495 01:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.435 01:01:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:53.435 01:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:54.374 01:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:55.314 01:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:56.693 01:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:56.693 [2024-12-07 01:01:12.550882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:56.693 [2024-12-07 01:01:12.550953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.693 [2024-12-07 01:01:12.550989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.693 [2024-12-07 01:01:12.551016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.693 [2024-12-07 01:01:12.551031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.693 [2024-12-07 01:01:12.551043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.693 [2024-12-07 01:01:12.551056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.693 [2024-12-07 01:01:12.551069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.693 [2024-12-07 01:01:12.551082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.693 [2024-12-07 01:01:12.551095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.693 [2024-12-07 01:01:12.551108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.693 [2024-12-07 01:01:12.551121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7510 is same with the state(6) to be set 00:33:56.693 [2024-12-07 01:01:12.560900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d7510 (9): Bad file descriptor 00:33:56.693 [2024-12-07 01:01:12.570940] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:56.693 [2024-12-07 01:01:12.570960] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
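
The errno-110 / qpair-teardown messages above are the host reacting to the fault injected at discovery_remove_ifc.sh@75-76 earlier in the trace; that step, plus the wait for the bdev list to drain, is simply:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''      # nvme0n1 is deleted once the 2 s --ctrlr-loss-timeout-sec expires
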
00:33:56.693 [2024-12-07 01:01:12.570988] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:56.693 [2024-12-07 01:01:12.571010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:56.693 [2024-12-07 01:01:12.571059] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.629 [2024-12-07 01:01:13.625062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:57.629 [2024-12-07 01:01:13.625115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d7510 with addr=10.0.0.2, port=4420 00:33:57.629 [2024-12-07 01:01:13.625136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7510 is same with the state(6) to be set 00:33:57.629 [2024-12-07 01:01:13.625182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d7510 (9): Bad file descriptor 00:33:57.629 [2024-12-07 01:01:13.625556] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:57.629 [2024-12-07 01:01:13.625590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:57.629 [2024-12-07 01:01:13.625605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:57.629 [2024-12-07 01:01:13.625622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:57.629 [2024-12-07 01:01:13.625633] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:57.629 [2024-12-07 01:01:13.625642] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:57.629 [2024-12-07 01:01:13.625650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:57.629 [2024-12-07 01:01:13.625662] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:57.629 [2024-12-07 01:01:13.625670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:57.629 01:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:58.568 [2024-12-07 01:01:14.628161] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:58.568 [2024-12-07 01:01:14.628212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:58.568 [2024-12-07 01:01:14.628241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:58.568 [2024-12-07 01:01:14.628265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:58.568 [2024-12-07 01:01:14.628295] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:58.568 [2024-12-07 01:01:14.628309] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:58.568 [2024-12-07 01:01:14.628320] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:58.568 [2024-12-07 01:01:14.628328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:58.568 [2024-12-07 01:01:14.628370] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:58.568 [2024-12-07 01:01:14.628427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.568 [2024-12-07 01:01:14.628450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.568 [2024-12-07 01:01:14.628472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.568 [2024-12-07 01:01:14.628486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.568 [2024-12-07 01:01:14.628500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.568 [2024-12-07 01:01:14.628513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.568 [2024-12-07 01:01:14.628536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.568 [2024-12-07 01:01:14.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.568 [2024-12-07 01:01:14.628564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.568 [2024-12-07 01:01:14.628578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.568 [2024-12-07 01:01:14.628591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:58.568 [2024-12-07 01:01:14.628660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c6c60 (9): Bad file descriptor 00:33:58.568 [2024-12-07 01:01:14.629661] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:58.568 [2024-12-07 01:01:14.629683] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.568 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:58.829 01:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.775 01:01:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:59.775 01:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:00.711 [2024-12-07 01:01:16.683106] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:00.711 [2024-12-07 01:01:16.683133] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:00.711 [2024-12-07 01:01:16.683155] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:00.711 [2024-12-07 01:01:16.769458] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:00.711 01:01:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:00.970 [2024-12-07 01:01:16.984715] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:00.970 [2024-12-07 01:01:16.985565] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x18e1b70:1 started. 
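
The recovery half (discovery_remove_ifc.sh@82-86 above) restores the target address, brings the link back up, and waits for the discovery service to re-attach the subsystem under a new controller:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # the re-attached controller is nvme1, so its namespace bdev is nvme1n1
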
00:34:00.970 [2024-12-07 01:01:16.986943] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:00.970 [2024-12-07 01:01:16.987006] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:00.970 [2024-12-07 01:01:16.987054] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:00.970 [2024-12-07 01:01:16.987076] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:00.970 [2024-12-07 01:01:16.987087] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:00.970 [2024-12-07 01:01:16.991891] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x18e1b70 was disconnected and freed. delete nvme_qpair. 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 387204 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 387204 ']' 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 387204 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387204 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387204' 00:34:01.983 killing process with pid 387204 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 387204 00:34:01.983 01:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 387204 00:34:01.983 01:01:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.983 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.270 rmmod nvme_tcp 00:34:02.270 rmmod nvme_fabrics 00:34:02.270 rmmod nvme_keyring 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 387181 ']' 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 387181 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 387181 ']' 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 387181 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387181 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387181' 00:34:02.270 killing process with pid 387181 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 387181 00:34:02.270 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 387181 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.564 01:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.596 00:34:04.596 real 0m17.969s 00:34:04.596 user 0m25.827s 00:34:04.596 sys 0m3.196s 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.596 ************************************ 00:34:04.596 END TEST nvmf_discovery_remove_ifc 00:34:04.596 ************************************ 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.596 ************************************ 00:34:04.596 START TEST nvmf_identify_kernel_target 00:34:04.596 ************************************ 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:04.596 * Looking for test storage... 
00:34:04.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.596 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:04.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.596 --rc genhtml_branch_coverage=1 00:34:04.596 --rc genhtml_function_coverage=1 00:34:04.596 --rc genhtml_legend=1 00:34:04.596 --rc geninfo_all_blocks=1 00:34:04.597 --rc geninfo_unexecuted_blocks=1 00:34:04.597 00:34:04.597 ' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.597 --rc genhtml_branch_coverage=1 00:34:04.597 --rc genhtml_function_coverage=1 00:34:04.597 --rc genhtml_legend=1 00:34:04.597 --rc geninfo_all_blocks=1 00:34:04.597 --rc geninfo_unexecuted_blocks=1 00:34:04.597 00:34:04.597 ' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.597 --rc genhtml_branch_coverage=1 00:34:04.597 --rc genhtml_function_coverage=1 00:34:04.597 --rc genhtml_legend=1 00:34:04.597 --rc geninfo_all_blocks=1 00:34:04.597 --rc geninfo_unexecuted_blocks=1 00:34:04.597 00:34:04.597 ' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.597 --rc genhtml_branch_coverage=1 00:34:04.597 --rc genhtml_function_coverage=1 00:34:04.597 --rc genhtml_legend=1 00:34:04.597 --rc geninfo_all_blocks=1 00:34:04.597 --rc geninfo_unexecuted_blocks=1 00:34:04.597 00:34:04.597 ' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:04.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.597 01:01:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.139 01:01:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:07.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:07.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:07.139 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:07.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.139 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:34:07.139 00:34:07.139 --- 10.0.0.2 ping statistics --- 00:34:07.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.140 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:34:07.140 00:34:07.140 --- 10.0.0.1 ping statistics --- 00:34:07.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.140 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.140 01:01:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:07.140 01:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:08.076 Waiting for block devices as requested 00:34:08.076 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:08.076 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:08.076 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:08.334 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:08.334 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:08.334 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:08.592 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:08.592 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:08.592 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:08.592 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:08.851 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:08.851 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:08.851 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:08.851 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:09.110 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:09.110 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:09.110 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
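For reference, the nvmftestinit portion of the trace above reduces to the following interface setup: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the other (cvl_0_1) stays on the host with 10.0.0.1/24, and TCP/4420 is opened before both sides are ping-tested. A simplified sketch with the same names and addresses as the log, omitting the framework's address flushes, iptables comment tag and error handling:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in, then check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1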
00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:09.368 No valid GPT data, bailing 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:09.368 00:34:09.368 Discovery Log Number of Records 2, Generation counter 2 00:34:09.368 =====Discovery Log Entry 0====== 00:34:09.368 trtype: tcp 00:34:09.368 adrfam: ipv4 00:34:09.368 subtype: current discovery subsystem 00:34:09.368 treq: not specified, sq flow control disable supported 00:34:09.368 portid: 1 00:34:09.368 trsvcid: 4420 00:34:09.368 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:09.368 traddr: 10.0.0.1 00:34:09.368 eflags: none 00:34:09.368 sectype: none 00:34:09.368 =====Discovery Log Entry 1====== 00:34:09.368 trtype: tcp 00:34:09.368 adrfam: ipv4 00:34:09.368 subtype: nvme subsystem 00:34:09.368 treq: not specified, sq flow control disable 
supported 00:34:09.368 portid: 1 00:34:09.368 trsvcid: 4420 00:34:09.368 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:09.368 traddr: 10.0.0.1 00:34:09.368 eflags: none 00:34:09.368 sectype: none 00:34:09.368 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:09.368 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:09.627 ===================================================== 00:34:09.627 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:09.627 ===================================================== 00:34:09.627 Controller Capabilities/Features 00:34:09.627 ================================ 00:34:09.627 Vendor ID: 0000 00:34:09.627 Subsystem Vendor ID: 0000 00:34:09.627 Serial Number: 509caeb88f76b755db13 00:34:09.627 Model Number: Linux 00:34:09.627 Firmware Version: 6.8.9-20 00:34:09.627 Recommended Arb Burst: 0 00:34:09.627 IEEE OUI Identifier: 00 00 00 00:34:09.627 Multi-path I/O 00:34:09.627 May have multiple subsystem ports: No 00:34:09.627 May have multiple controllers: No 00:34:09.627 Associated with SR-IOV VF: No 00:34:09.627 Max Data Transfer Size: Unlimited 00:34:09.627 Max Number of Namespaces: 0 00:34:09.627 Max Number of I/O Queues: 1024 00:34:09.627 NVMe Specification Version (VS): 1.3 00:34:09.627 NVMe Specification Version (Identify): 1.3 00:34:09.627 Maximum Queue Entries: 1024 00:34:09.627 Contiguous Queues Required: No 00:34:09.627 Arbitration Mechanisms Supported 00:34:09.627 Weighted Round Robin: Not Supported 00:34:09.627 Vendor Specific: Not Supported 00:34:09.627 Reset Timeout: 7500 ms 00:34:09.627 Doorbell Stride: 4 bytes 00:34:09.627 NVM Subsystem Reset: Not Supported 00:34:09.627 Command Sets Supported 00:34:09.627 NVM Command Set: Supported 00:34:09.627 Boot Partition: Not Supported 00:34:09.627 Memory Page Size Minimum: 4096 bytes 00:34:09.627 Memory Page Size Maximum: 4096 bytes 00:34:09.627 Persistent Memory Region: Not Supported 00:34:09.627 Optional Asynchronous Events Supported 00:34:09.627 Namespace Attribute Notices: Not Supported 00:34:09.627 Firmware Activation Notices: Not Supported 00:34:09.627 ANA Change Notices: Not Supported 00:34:09.627 PLE Aggregate Log Change Notices: Not Supported 00:34:09.627 LBA Status Info Alert Notices: Not Supported 00:34:09.627 EGE Aggregate Log Change Notices: Not Supported 00:34:09.627 Normal NVM Subsystem Shutdown event: Not Supported 00:34:09.627 Zone Descriptor Change Notices: Not Supported 00:34:09.627 Discovery Log Change Notices: Supported 00:34:09.627 Controller Attributes 00:34:09.627 128-bit Host Identifier: Not Supported 00:34:09.627 Non-Operational Permissive Mode: Not Supported 00:34:09.627 NVM Sets: Not Supported 00:34:09.627 Read Recovery Levels: Not Supported 00:34:09.627 Endurance Groups: Not Supported 00:34:09.627 Predictable Latency Mode: Not Supported 00:34:09.627 Traffic Based Keep ALive: Not Supported 00:34:09.627 Namespace Granularity: Not Supported 00:34:09.627 SQ Associations: Not Supported 00:34:09.627 UUID List: Not Supported 00:34:09.627 Multi-Domain Subsystem: Not Supported 00:34:09.627 Fixed Capacity Management: Not Supported 00:34:09.627 Variable Capacity Management: Not Supported 00:34:09.627 Delete Endurance Group: Not Supported 00:34:09.627 Delete NVM Set: Not Supported 00:34:09.627 Extended LBA Formats Supported: Not Supported 00:34:09.627 Flexible Data Placement 
Supported: Not Supported 00:34:09.627 00:34:09.627 Controller Memory Buffer Support 00:34:09.627 ================================ 00:34:09.627 Supported: No 00:34:09.627 00:34:09.627 Persistent Memory Region Support 00:34:09.627 ================================ 00:34:09.627 Supported: No 00:34:09.627 00:34:09.627 Admin Command Set Attributes 00:34:09.627 ============================ 00:34:09.627 Security Send/Receive: Not Supported 00:34:09.627 Format NVM: Not Supported 00:34:09.627 Firmware Activate/Download: Not Supported 00:34:09.627 Namespace Management: Not Supported 00:34:09.627 Device Self-Test: Not Supported 00:34:09.627 Directives: Not Supported 00:34:09.627 NVMe-MI: Not Supported 00:34:09.627 Virtualization Management: Not Supported 00:34:09.627 Doorbell Buffer Config: Not Supported 00:34:09.627 Get LBA Status Capability: Not Supported 00:34:09.627 Command & Feature Lockdown Capability: Not Supported 00:34:09.627 Abort Command Limit: 1 00:34:09.627 Async Event Request Limit: 1 00:34:09.627 Number of Firmware Slots: N/A 00:34:09.627 Firmware Slot 1 Read-Only: N/A 00:34:09.627 Firmware Activation Without Reset: N/A 00:34:09.627 Multiple Update Detection Support: N/A 00:34:09.627 Firmware Update Granularity: No Information Provided 00:34:09.627 Per-Namespace SMART Log: No 00:34:09.627 Asymmetric Namespace Access Log Page: Not Supported 00:34:09.627 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:09.627 Command Effects Log Page: Not Supported 00:34:09.627 Get Log Page Extended Data: Supported 00:34:09.627 Telemetry Log Pages: Not Supported 00:34:09.627 Persistent Event Log Pages: Not Supported 00:34:09.627 Supported Log Pages Log Page: May Support 00:34:09.627 Commands Supported & Effects Log Page: Not Supported 00:34:09.627 Feature Identifiers & Effects Log Page:May Support 00:34:09.627 NVMe-MI Commands & Effects Log Page: May Support 00:34:09.627 Data Area 4 for Telemetry Log: Not Supported 00:34:09.627 Error Log Page Entries Supported: 1 00:34:09.627 Keep Alive: Not Supported 00:34:09.627 00:34:09.627 NVM Command Set Attributes 00:34:09.627 ========================== 00:34:09.627 Submission Queue Entry Size 00:34:09.627 Max: 1 00:34:09.627 Min: 1 00:34:09.627 Completion Queue Entry Size 00:34:09.627 Max: 1 00:34:09.627 Min: 1 00:34:09.627 Number of Namespaces: 0 00:34:09.628 Compare Command: Not Supported 00:34:09.628 Write Uncorrectable Command: Not Supported 00:34:09.628 Dataset Management Command: Not Supported 00:34:09.628 Write Zeroes Command: Not Supported 00:34:09.628 Set Features Save Field: Not Supported 00:34:09.628 Reservations: Not Supported 00:34:09.628 Timestamp: Not Supported 00:34:09.628 Copy: Not Supported 00:34:09.628 Volatile Write Cache: Not Present 00:34:09.628 Atomic Write Unit (Normal): 1 00:34:09.628 Atomic Write Unit (PFail): 1 00:34:09.628 Atomic Compare & Write Unit: 1 00:34:09.628 Fused Compare & Write: Not Supported 00:34:09.628 Scatter-Gather List 00:34:09.628 SGL Command Set: Supported 00:34:09.628 SGL Keyed: Not Supported 00:34:09.628 SGL Bit Bucket Descriptor: Not Supported 00:34:09.628 SGL Metadata Pointer: Not Supported 00:34:09.628 Oversized SGL: Not Supported 00:34:09.628 SGL Metadata Address: Not Supported 00:34:09.628 SGL Offset: Supported 00:34:09.628 Transport SGL Data Block: Not Supported 00:34:09.628 Replay Protected Memory Block: Not Supported 00:34:09.628 00:34:09.628 Firmware Slot Information 00:34:09.628 ========================= 00:34:09.628 Active slot: 0 00:34:09.628 00:34:09.628 00:34:09.628 Error Log 00:34:09.628 
========= 00:34:09.628 00:34:09.628 Active Namespaces 00:34:09.628 ================= 00:34:09.628 Discovery Log Page 00:34:09.628 ================== 00:34:09.628 Generation Counter: 2 00:34:09.628 Number of Records: 2 00:34:09.628 Record Format: 0 00:34:09.628 00:34:09.628 Discovery Log Entry 0 00:34:09.628 ---------------------- 00:34:09.628 Transport Type: 3 (TCP) 00:34:09.628 Address Family: 1 (IPv4) 00:34:09.628 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:09.628 Entry Flags: 00:34:09.628 Duplicate Returned Information: 0 00:34:09.628 Explicit Persistent Connection Support for Discovery: 0 00:34:09.628 Transport Requirements: 00:34:09.628 Secure Channel: Not Specified 00:34:09.628 Port ID: 1 (0x0001) 00:34:09.628 Controller ID: 65535 (0xffff) 00:34:09.628 Admin Max SQ Size: 32 00:34:09.628 Transport Service Identifier: 4420 00:34:09.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:09.628 Transport Address: 10.0.0.1 00:34:09.628 Discovery Log Entry 1 00:34:09.628 ---------------------- 00:34:09.628 Transport Type: 3 (TCP) 00:34:09.628 Address Family: 1 (IPv4) 00:34:09.628 Subsystem Type: 2 (NVM Subsystem) 00:34:09.628 Entry Flags: 00:34:09.628 Duplicate Returned Information: 0 00:34:09.628 Explicit Persistent Connection Support for Discovery: 0 00:34:09.628 Transport Requirements: 00:34:09.628 Secure Channel: Not Specified 00:34:09.628 Port ID: 1 (0x0001) 00:34:09.628 Controller ID: 65535 (0xffff) 00:34:09.628 Admin Max SQ Size: 32 00:34:09.628 Transport Service Identifier: 4420 00:34:09.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:09.628 Transport Address: 10.0.0.1 00:34:09.628 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:09.628 get_feature(0x01) failed 00:34:09.628 get_feature(0x02) failed 00:34:09.628 get_feature(0x04) failed 00:34:09.628 ===================================================== 00:34:09.628 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:09.628 ===================================================== 00:34:09.628 Controller Capabilities/Features 00:34:09.628 ================================ 00:34:09.628 Vendor ID: 0000 00:34:09.628 Subsystem Vendor ID: 0000 00:34:09.628 Serial Number: 4afcb7561dd31c7d151c 00:34:09.628 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:09.628 Firmware Version: 6.8.9-20 00:34:09.628 Recommended Arb Burst: 6 00:34:09.628 IEEE OUI Identifier: 00 00 00 00:34:09.628 Multi-path I/O 00:34:09.628 May have multiple subsystem ports: Yes 00:34:09.628 May have multiple controllers: Yes 00:34:09.628 Associated with SR-IOV VF: No 00:34:09.628 Max Data Transfer Size: Unlimited 00:34:09.628 Max Number of Namespaces: 1024 00:34:09.628 Max Number of I/O Queues: 128 00:34:09.628 NVMe Specification Version (VS): 1.3 00:34:09.628 NVMe Specification Version (Identify): 1.3 00:34:09.628 Maximum Queue Entries: 1024 00:34:09.628 Contiguous Queues Required: No 00:34:09.628 Arbitration Mechanisms Supported 00:34:09.628 Weighted Round Robin: Not Supported 00:34:09.628 Vendor Specific: Not Supported 00:34:09.628 Reset Timeout: 7500 ms 00:34:09.628 Doorbell Stride: 4 bytes 00:34:09.628 NVM Subsystem Reset: Not Supported 00:34:09.628 Command Sets Supported 00:34:09.628 NVM Command Set: Supported 00:34:09.628 Boot Partition: Not Supported 00:34:09.628 
Memory Page Size Minimum: 4096 bytes 00:34:09.628 Memory Page Size Maximum: 4096 bytes 00:34:09.628 Persistent Memory Region: Not Supported 00:34:09.628 Optional Asynchronous Events Supported 00:34:09.628 Namespace Attribute Notices: Supported 00:34:09.628 Firmware Activation Notices: Not Supported 00:34:09.628 ANA Change Notices: Supported 00:34:09.628 PLE Aggregate Log Change Notices: Not Supported 00:34:09.628 LBA Status Info Alert Notices: Not Supported 00:34:09.628 EGE Aggregate Log Change Notices: Not Supported 00:34:09.628 Normal NVM Subsystem Shutdown event: Not Supported 00:34:09.628 Zone Descriptor Change Notices: Not Supported 00:34:09.628 Discovery Log Change Notices: Not Supported 00:34:09.628 Controller Attributes 00:34:09.628 128-bit Host Identifier: Supported 00:34:09.628 Non-Operational Permissive Mode: Not Supported 00:34:09.628 NVM Sets: Not Supported 00:34:09.628 Read Recovery Levels: Not Supported 00:34:09.628 Endurance Groups: Not Supported 00:34:09.628 Predictable Latency Mode: Not Supported 00:34:09.628 Traffic Based Keep ALive: Supported 00:34:09.628 Namespace Granularity: Not Supported 00:34:09.628 SQ Associations: Not Supported 00:34:09.628 UUID List: Not Supported 00:34:09.628 Multi-Domain Subsystem: Not Supported 00:34:09.628 Fixed Capacity Management: Not Supported 00:34:09.628 Variable Capacity Management: Not Supported 00:34:09.628 Delete Endurance Group: Not Supported 00:34:09.628 Delete NVM Set: Not Supported 00:34:09.628 Extended LBA Formats Supported: Not Supported 00:34:09.628 Flexible Data Placement Supported: Not Supported 00:34:09.628 00:34:09.628 Controller Memory Buffer Support 00:34:09.628 ================================ 00:34:09.628 Supported: No 00:34:09.628 00:34:09.628 Persistent Memory Region Support 00:34:09.628 ================================ 00:34:09.628 Supported: No 00:34:09.628 00:34:09.628 Admin Command Set Attributes 00:34:09.628 ============================ 00:34:09.628 Security Send/Receive: Not Supported 00:34:09.628 Format NVM: Not Supported 00:34:09.628 Firmware Activate/Download: Not Supported 00:34:09.628 Namespace Management: Not Supported 00:34:09.628 Device Self-Test: Not Supported 00:34:09.628 Directives: Not Supported 00:34:09.628 NVMe-MI: Not Supported 00:34:09.628 Virtualization Management: Not Supported 00:34:09.628 Doorbell Buffer Config: Not Supported 00:34:09.628 Get LBA Status Capability: Not Supported 00:34:09.628 Command & Feature Lockdown Capability: Not Supported 00:34:09.628 Abort Command Limit: 4 00:34:09.628 Async Event Request Limit: 4 00:34:09.628 Number of Firmware Slots: N/A 00:34:09.628 Firmware Slot 1 Read-Only: N/A 00:34:09.628 Firmware Activation Without Reset: N/A 00:34:09.628 Multiple Update Detection Support: N/A 00:34:09.628 Firmware Update Granularity: No Information Provided 00:34:09.628 Per-Namespace SMART Log: Yes 00:34:09.628 Asymmetric Namespace Access Log Page: Supported 00:34:09.628 ANA Transition Time : 10 sec 00:34:09.628 00:34:09.628 Asymmetric Namespace Access Capabilities 00:34:09.628 ANA Optimized State : Supported 00:34:09.628 ANA Non-Optimized State : Supported 00:34:09.628 ANA Inaccessible State : Supported 00:34:09.628 ANA Persistent Loss State : Supported 00:34:09.628 ANA Change State : Supported 00:34:09.628 ANAGRPID is not changed : No 00:34:09.628 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:09.628 00:34:09.628 ANA Group Identifier Maximum : 128 00:34:09.628 Number of ANA Group Identifiers : 128 00:34:09.628 Max Number of Allowed Namespaces : 1024 00:34:09.628 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:09.628 Command Effects Log Page: Supported 00:34:09.628 Get Log Page Extended Data: Supported 00:34:09.628 Telemetry Log Pages: Not Supported 00:34:09.628 Persistent Event Log Pages: Not Supported 00:34:09.628 Supported Log Pages Log Page: May Support 00:34:09.628 Commands Supported & Effects Log Page: Not Supported 00:34:09.628 Feature Identifiers & Effects Log Page:May Support 00:34:09.628 NVMe-MI Commands & Effects Log Page: May Support 00:34:09.628 Data Area 4 for Telemetry Log: Not Supported 00:34:09.629 Error Log Page Entries Supported: 128 00:34:09.629 Keep Alive: Supported 00:34:09.629 Keep Alive Granularity: 1000 ms 00:34:09.629 00:34:09.629 NVM Command Set Attributes 00:34:09.629 ========================== 00:34:09.629 Submission Queue Entry Size 00:34:09.629 Max: 64 00:34:09.629 Min: 64 00:34:09.629 Completion Queue Entry Size 00:34:09.629 Max: 16 00:34:09.629 Min: 16 00:34:09.629 Number of Namespaces: 1024 00:34:09.629 Compare Command: Not Supported 00:34:09.629 Write Uncorrectable Command: Not Supported 00:34:09.629 Dataset Management Command: Supported 00:34:09.629 Write Zeroes Command: Supported 00:34:09.629 Set Features Save Field: Not Supported 00:34:09.629 Reservations: Not Supported 00:34:09.629 Timestamp: Not Supported 00:34:09.629 Copy: Not Supported 00:34:09.629 Volatile Write Cache: Present 00:34:09.629 Atomic Write Unit (Normal): 1 00:34:09.629 Atomic Write Unit (PFail): 1 00:34:09.629 Atomic Compare & Write Unit: 1 00:34:09.629 Fused Compare & Write: Not Supported 00:34:09.629 Scatter-Gather List 00:34:09.629 SGL Command Set: Supported 00:34:09.629 SGL Keyed: Not Supported 00:34:09.629 SGL Bit Bucket Descriptor: Not Supported 00:34:09.629 SGL Metadata Pointer: Not Supported 00:34:09.629 Oversized SGL: Not Supported 00:34:09.629 SGL Metadata Address: Not Supported 00:34:09.629 SGL Offset: Supported 00:34:09.629 Transport SGL Data Block: Not Supported 00:34:09.629 Replay Protected Memory Block: Not Supported 00:34:09.629 00:34:09.629 Firmware Slot Information 00:34:09.629 ========================= 00:34:09.629 Active slot: 0 00:34:09.629 00:34:09.629 Asymmetric Namespace Access 00:34:09.629 =========================== 00:34:09.629 Change Count : 0 00:34:09.629 Number of ANA Group Descriptors : 1 00:34:09.629 ANA Group Descriptor : 0 00:34:09.629 ANA Group ID : 1 00:34:09.629 Number of NSID Values : 1 00:34:09.629 Change Count : 0 00:34:09.629 ANA State : 1 00:34:09.629 Namespace Identifier : 1 00:34:09.629 00:34:09.629 Commands Supported and Effects 00:34:09.629 ============================== 00:34:09.629 Admin Commands 00:34:09.629 -------------- 00:34:09.629 Get Log Page (02h): Supported 00:34:09.629 Identify (06h): Supported 00:34:09.629 Abort (08h): Supported 00:34:09.629 Set Features (09h): Supported 00:34:09.629 Get Features (0Ah): Supported 00:34:09.629 Asynchronous Event Request (0Ch): Supported 00:34:09.629 Keep Alive (18h): Supported 00:34:09.629 I/O Commands 00:34:09.629 ------------ 00:34:09.629 Flush (00h): Supported 00:34:09.629 Write (01h): Supported LBA-Change 00:34:09.629 Read (02h): Supported 00:34:09.629 Write Zeroes (08h): Supported LBA-Change 00:34:09.629 Dataset Management (09h): Supported 00:34:09.629 00:34:09.629 Error Log 00:34:09.629 ========= 00:34:09.629 Entry: 0 00:34:09.629 Error Count: 0x3 00:34:09.629 Submission Queue Id: 0x0 00:34:09.629 Command Id: 0x5 00:34:09.629 Phase Bit: 0 00:34:09.629 Status Code: 0x2 00:34:09.629 Status Code Type: 0x0 00:34:09.629 Do Not Retry: 1 00:34:09.629 
Error Location: 0x28 00:34:09.629 LBA: 0x0 00:34:09.629 Namespace: 0x0 00:34:09.629 Vendor Log Page: 0x0 00:34:09.629 ----------- 00:34:09.629 Entry: 1 00:34:09.629 Error Count: 0x2 00:34:09.629 Submission Queue Id: 0x0 00:34:09.629 Command Id: 0x5 00:34:09.629 Phase Bit: 0 00:34:09.629 Status Code: 0x2 00:34:09.629 Status Code Type: 0x0 00:34:09.629 Do Not Retry: 1 00:34:09.629 Error Location: 0x28 00:34:09.629 LBA: 0x0 00:34:09.629 Namespace: 0x0 00:34:09.629 Vendor Log Page: 0x0 00:34:09.629 ----------- 00:34:09.629 Entry: 2 00:34:09.629 Error Count: 0x1 00:34:09.629 Submission Queue Id: 0x0 00:34:09.629 Command Id: 0x4 00:34:09.629 Phase Bit: 0 00:34:09.629 Status Code: 0x2 00:34:09.629 Status Code Type: 0x0 00:34:09.629 Do Not Retry: 1 00:34:09.629 Error Location: 0x28 00:34:09.629 LBA: 0x0 00:34:09.629 Namespace: 0x0 00:34:09.629 Vendor Log Page: 0x0 00:34:09.629 00:34:09.629 Number of Queues 00:34:09.629 ================ 00:34:09.629 Number of I/O Submission Queues: 128 00:34:09.629 Number of I/O Completion Queues: 128 00:34:09.629 00:34:09.629 ZNS Specific Controller Data 00:34:09.629 ============================ 00:34:09.629 Zone Append Size Limit: 0 00:34:09.629 00:34:09.629 00:34:09.629 Active Namespaces 00:34:09.629 ================= 00:34:09.629 get_feature(0x05) failed 00:34:09.629 Namespace ID:1 00:34:09.629 Command Set Identifier: NVM (00h) 00:34:09.629 Deallocate: Supported 00:34:09.629 Deallocated/Unwritten Error: Not Supported 00:34:09.629 Deallocated Read Value: Unknown 00:34:09.629 Deallocate in Write Zeroes: Not Supported 00:34:09.629 Deallocated Guard Field: 0xFFFF 00:34:09.629 Flush: Supported 00:34:09.629 Reservation: Not Supported 00:34:09.629 Namespace Sharing Capabilities: Multiple Controllers 00:34:09.629 Size (in LBAs): 1953525168 (931GiB) 00:34:09.629 Capacity (in LBAs): 1953525168 (931GiB) 00:34:09.629 Utilization (in LBAs): 1953525168 (931GiB) 00:34:09.629 UUID: 644f173c-31e4-4136-b702-cd8801cb2291 00:34:09.629 Thin Provisioning: Not Supported 00:34:09.629 Per-NS Atomic Units: Yes 00:34:09.629 Atomic Boundary Size (Normal): 0 00:34:09.629 Atomic Boundary Size (PFail): 0 00:34:09.629 Atomic Boundary Offset: 0 00:34:09.629 NGUID/EUI64 Never Reused: No 00:34:09.629 ANA group ID: 1 00:34:09.629 Namespace Write Protected: No 00:34:09.629 Number of LBA Formats: 1 00:34:09.629 Current LBA Format: LBA Format #00 00:34:09.629 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:09.629 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.629 rmmod nvme_tcp 00:34:09.629 rmmod nvme_fabrics 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:09.629 01:01:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.629 01:01:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:12.170 01:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:13.109 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:13.109 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:13.109 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:14.052 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:14.311 00:34:14.311 real 0m9.771s 00:34:14.311 user 0m2.076s 00:34:14.311 sys 0m3.597s 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.311 ************************************ 00:34:14.311 END TEST nvmf_identify_kernel_target 00:34:14.311 ************************************ 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.311 ************************************ 00:34:14.311 START TEST nvmf_auth_host 00:34:14.311 ************************************ 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:14.311 * Looking for test storage... 
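For reference, the identify step above drives spdk_nvme_identify against the kernel target exported at 10.0.0.1:4420; the same check can be reproduced with stock nvme-cli. A minimal sketch, assuming nvme-cli is installed, the nvme-tcp and nvme-fabrics modules are loaded, and the connected controller enumerates as /dev/nvme1 (a hypothetical device node, the actual name depends on enumeration):

  # List the discovery log entries exposed at 10.0.0.1:4420 (matches the two records shown above)
  nvme discover -t tcp -a 10.0.0.1 -s 4420

  # Connect to the NVM subsystem and read its Identify Controller data
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  nvme id-ctrl /dev/nvme1

  # Tear the association back down when finished
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn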
00:34:14.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:14.311 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:14.570 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.571 --rc genhtml_branch_coverage=1 00:34:14.571 --rc genhtml_function_coverage=1 00:34:14.571 --rc genhtml_legend=1 00:34:14.571 --rc geninfo_all_blocks=1 00:34:14.571 --rc geninfo_unexecuted_blocks=1 00:34:14.571 00:34:14.571 ' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.571 --rc genhtml_branch_coverage=1 00:34:14.571 --rc genhtml_function_coverage=1 00:34:14.571 --rc genhtml_legend=1 00:34:14.571 --rc geninfo_all_blocks=1 00:34:14.571 --rc geninfo_unexecuted_blocks=1 00:34:14.571 00:34:14.571 ' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.571 --rc genhtml_branch_coverage=1 00:34:14.571 --rc genhtml_function_coverage=1 00:34:14.571 --rc genhtml_legend=1 00:34:14.571 --rc geninfo_all_blocks=1 00:34:14.571 --rc geninfo_unexecuted_blocks=1 00:34:14.571 00:34:14.571 ' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:14.571 --rc genhtml_branch_coverage=1 00:34:14.571 --rc genhtml_function_coverage=1 00:34:14.571 --rc genhtml_legend=1 00:34:14.571 --rc geninfo_all_blocks=1 00:34:14.571 --rc geninfo_unexecuted_blocks=1 00:34:14.571 00:34:14.571 ' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.571 01:01:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.571 01:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.111 01:01:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:17.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:17.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.111 
01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:17.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:17.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.111 01:01:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:34:17.111 00:34:17.111 --- 10.0.0.2 ping statistics --- 00:34:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.111 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:34:17.111 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:17.111 00:34:17.111 --- 10.0.0.1 ping statistics --- 00:34:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.111 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=394445 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 394445 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 394445 ']' 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
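The nvmf_tcp_init sequence above builds the two-endpoint TCP rig used by the remaining host tests: the target-side interface (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2/24, the initiator-side interface (cvl_0_1) stays in the default namespace as 10.0.0.1/24, an iptables rule admits port 4420, and reachability is ping-verified in both directions. A minimal standalone sketch of the same setup, run as root and assuming the interface names from this run:

  # Target side lives in its own namespace so initiator and target can share one host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Admit NVMe/TCP traffic on the initiator interface and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1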
00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.112 01:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d26034a82fb749b66b9ffda079a76583 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yio 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d26034a82fb749b66b9ffda079a76583 0 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d26034a82fb749b66b9ffda079a76583 0 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d26034a82fb749b66b9ffda079a76583 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:17.112 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yio 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yio 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yio 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.371 01:01:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:17.371 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1aaacca4b2241ecf9c6055b71c052716dbc8e5e345106d3d4582af42df1d582b 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YeE 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1aaacca4b2241ecf9c6055b71c052716dbc8e5e345106d3d4582af42df1d582b 3 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1aaacca4b2241ecf9c6055b71c052716dbc8e5e345106d3d4582af42df1d582b 3 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1aaacca4b2241ecf9c6055b71c052716dbc8e5e345106d3d4582af42df1d582b 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YeE 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YeE 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YeE 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91883bbc93681b5a5bc2333468cb79267702006f5de0cfe1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NfO 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91883bbc93681b5a5bc2333468cb79267702006f5de0cfe1 0 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91883bbc93681b5a5bc2333468cb79267702006f5de0cfe1 0 
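Each gen_dhchap_key call above draws the requested number of random hex characters from /dev/urandom with xxd, stores the result in a mode-0600 temp file, and wraps it into a DHHC-1 secret via the in-tree python helper before recording the path in keys[n]/ckeys[n] for the auth test. A stripped-down sketch of the raw-key half of that flow (the DHHC-1 wrapping done by the python snippet is omitted here):

  # 32 hex characters of key material, as in the 'gen_dhchap_key null 32' call above
  key=$(xxd -p -c0 -l 16 /dev/urandom)

  # Store it the way the harness does: a private temp file under /tmp
  file=$(mktemp -t spdk.key-null.XXX)
  echo "$key" > "$file"
  chmod 0600 "$file"
  echo "$file"   # this path is what ends up in keys[0]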
00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91883bbc93681b5a5bc2333468cb79267702006f5de0cfe1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NfO 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NfO 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NfO 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f709f9f199bb088fd2c1cfec0325ebb62e223c8a568f48e6 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oEA 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f709f9f199bb088fd2c1cfec0325ebb62e223c8a568f48e6 2 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f709f9f199bb088fd2c1cfec0325ebb62e223c8a568f48e6 2 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f709f9f199bb088fd2c1cfec0325ebb62e223c8a568f48e6 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oEA 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oEA 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oEA 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.372 01:01:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80c32f1b8db10ee613fa1f4d1565c894 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.INp 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80c32f1b8db10ee613fa1f4d1565c894 1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80c32f1b8db10ee613fa1f4d1565c894 1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80c32f1b8db10ee613fa1f4d1565c894 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.INp 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.INp 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.INp 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cba42601928ae1fd88dd980e41f3ae26 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BPQ 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cba42601928ae1fd88dd980e41f3ae26 1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cba42601928ae1fd88dd980e41f3ae26 1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=cba42601928ae1fd88dd980e41f3ae26 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:17.372 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BPQ 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BPQ 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BPQ 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b6483bc9baf313d755ef4f87262bb77cd0775d3bea0801f 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eig 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b6483bc9baf313d755ef4f87262bb77cd0775d3bea0801f 2 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b6483bc9baf313d755ef4f87262bb77cd0775d3bea0801f 2 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b6483bc9baf313d755ef4f87262bb77cd0775d3bea0801f 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eig 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eig 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eig 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.631 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:17.632 01:01:33 
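Each gen_dhchap_key round traced here follows the same recipe: xxd reads len/2 bytes from /dev/urandom and prints them as a len-character hex string, mktemp reserves a /tmp/spdk.key-<digest>.XXX file, and format_dhchap_key passes the hex string plus a digest index (null=0, sha256=1, sha384=2, sha512=3, per the digests map above) to an inline "python -" step whose body is not shown in the xtrace, after which chmod 0600 is applied. Judging by the DHHC-1 strings that appear later in the log, the python step wraps the secret into the NVMe qualified-secret form DHHC-1:<dd>:<base64 of the ASCII secret plus a 4-byte checksum, which the qualified-secret format defines as a CRC-32>:. A minimal bash sketch of that flow under those assumptions; the helper name make_dhchap_key, the mktemp template and the inline python body are illustrative, not the script's actual code:

  make_dhchap_key() {  # usage: make_dhchap_key <digest-index> <hex-length>
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex secret of the requested length
      file=$(mktemp -t spdk.key-sketch.XXX)
      # DHHC-1:<dd>:<base64(secret || CRC-32)>:  -- CRC byte order assumed little-endian
      python3 -c 'import base64, sys, zlib; k = sys.argv[2].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(k + crc).decode()))' "$digest" "$key" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

Called as make_dhchap_key 2 48 it would yield a DHHC-1:02:... secret of the same shape as the sha384 controller key generated for slot 1 above.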
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8cba41d80ff2d7a64ee8d8c213f5862b 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1Yc 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8cba41d80ff2d7a64ee8d8c213f5862b 0 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8cba41d80ff2d7a64ee8d8c213f5862b 0 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8cba41d80ff2d7a64ee8d8c213f5862b 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1Yc 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1Yc 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1Yc 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80edd11b8c1f104ec3d2ced5711714f95ecb61d82cbd2c4673c97891d4383db5 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.n2g 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80edd11b8c1f104ec3d2ced5711714f95ecb61d82cbd2c4673c97891d4383db5 3 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80edd11b8c1f104ec3d2ced5711714f95ecb61d82cbd2c4673c97891d4383db5 3 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80edd11b8c1f104ec3d2ced5711714f95ecb61d82cbd2c4673c97891d4383db5 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.n2g 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.n2g 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.n2g 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 394445 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 394445 ']' 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.632 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yio 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.890 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YeE ]] 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YeE 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NfO 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oEA ]] 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.oEA 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.INp 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BPQ ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BPQ 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.eig 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1Yc ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1Yc 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.n2g 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.891 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.150 01:01:34 
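With nvmf_tgt (pid 394445) confirmed listening on /var/tmp/spdk.sock, the host/auth.sh@80-82 loop above registers every generated secret file with the SPDK target as a named key: key0..key4 for the host secrets and ckey0..ckey3 for the controller (bidirectional) secrets, slot 4 deliberately having no controller key. rpc_cmd is the test wrapper around scripts/rpc.py, so each iteration boils down to calls of this shape (the .Yio and .YeE paths are the mktemp names from earlier in this run; rpc.py talks to /var/tmp/spdk.sock by default):

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Yio
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YeE   # skipped for a slot whose ckeys entry is empty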
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:18.150 01:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:19.086 Waiting for block devices as requested 00:34:19.086 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:19.343 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:19.344 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:19.600 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:19.600 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:19.600 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:19.600 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:19.600 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:19.858 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:19.858 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:19.858 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:19.858 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:20.117 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:20.117 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:20.117 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:20.117 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:20.378 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:20.638 No valid GPT data, bailing 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:20.638 01:01:36 
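nvmet_auth_init then builds the kernel-side NVMe-oF target these keys will be tested against: setup.sh reset hands the NVMe drive back to the kernel nvme driver (the vfio-pci -> nvme/ioatdma rebind lines above), spdk-gpt.py confirms /dev/nvme0n1 carries no partition table, and configure_kernel_target creates the subsystem, namespace and port directories under /sys/kernel/config/nvmet. The echo and ln -s steps that follow fill in their attributes; the redirection targets are not visible in the xtrace, so the sketch below maps each echoed value to the standard nvmet configfs attribute it is presumably written to:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
  echo 1 > "$sub/attr_allow_any_host"            # auth.sh later echoes 0 here once the allowed host is linked
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"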
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:20.638 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:20.898 00:34:20.898 Discovery Log Number of Records 2, Generation counter 2 00:34:20.898 =====Discovery Log Entry 0====== 00:34:20.898 trtype: tcp 00:34:20.898 adrfam: ipv4 00:34:20.898 subtype: current discovery subsystem 00:34:20.898 treq: not specified, sq flow control disable supported 00:34:20.898 portid: 1 00:34:20.898 trsvcid: 4420 00:34:20.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:20.898 traddr: 10.0.0.1 00:34:20.898 eflags: none 00:34:20.898 sectype: none 00:34:20.898 =====Discovery Log Entry 1====== 00:34:20.898 trtype: tcp 00:34:20.898 adrfam: ipv4 00:34:20.898 subtype: nvme subsystem 00:34:20.898 treq: not specified, sq flow control disable supported 00:34:20.898 portid: 1 00:34:20.898 trsvcid: 4420 00:34:20.898 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:20.898 traddr: 10.0.0.1 00:34:20.898 eflags: none 00:34:20.898 sectype: none 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.898 01:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.156 nvme0n1 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.156 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
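nvmet_auth_set_key is the kernel half of each round: for the allowed host nqn.2024-02.io.spdk:host0 it echoes the HMAC name ('hmac(sha256)' here), the DH group and the DHHC-1 secrets into the host's dhchap attributes under configfs. The redirections are again not traced, but the four echo lines at host/auth.sh@48-51 presumably expand to something like the following (secrets shortened here; the full strings appear in the trace above):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048 > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:OTE4ODNi...zoNGEA==:" > "$host/dhchap_key"        # keys[1]
  echo "DHHC-1:02:ZjcwOWY5...1D0TAA==:" > "$host/dhchap_ctrl_key"   # ckeys[1], only when a controller key exists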
00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.157 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.415 nvme0n1 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.415 01:01:37 
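connect_authenticate (host/auth.sh@55-65) is the SPDK half of the round just completed: it restricts the initiator to the digest/DH-group pair under test, attaches the kernel target using the keyring names registered earlier, and treats the appearance of controller nvme0 (and its namespace nvme0n1) as proof that DH-HMAC-CHAP completed, then detaches again. Stripped of rpc_cmd and the xtrace noise, one round is roughly:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0   # --dhchap-ctrlr-key is dropped when the slot has no ckey
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0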
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.415 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.416 nvme0n1 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.416 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.674 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.675 nvme0n1 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:21.675 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.934 nvme0n1 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.934 01:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.934 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.935 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.195 nvme0n1 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.195 01:01:38 
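Slot 4 is the unidirectional case: keys[4] is a sha512 secret but ckeys[4] is empty, so the [[ -z '' ]] test above drops --dhchap-ctrlr-key from the attach and only the host authenticates itself. The rounds then repeat for every combination announced at host/auth.sh@100-102; reconstructed from the trace (the digests and dhgroups arrays match the sha256,sha384,sha512 and ffdhe2048..ffdhe8192 lists printed at host/auth.sh@94), the driving loop is effectively:

  for digest in sha256 sha384 sha512; do
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
          for keyid in "${!keys[@]}"; do                    # 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done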
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.195 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.456 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.716 nvme0n1 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.716 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.717 
01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.717 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.977 nvme0n1 00:34:22.977 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.977 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.978 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.978 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.978 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.978 01:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.978 01:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.978 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.238 nvme0n1 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.238 01:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.238 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.498 nvme0n1 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.498 01:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.498 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.755 nvme0n1 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:23.755 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.756 01:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.322 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.581 nvme0n1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:24.581 01:01:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.581 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.839 nvme0n1 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.839 01:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
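Each iteration of the trace above reduces to the same two host-side RPCs: restrict the initiator to one DH-CHAP digest/DH-group pair, then attach the controller with the matching key pair. The sketch below condenses that sequence for the ffdhe4096/key1 case being exercised here; it is a minimal illustration only, assuming SPDK's rpc.py is used in place of the test's rpc_cmd wrapper and that the keys named key1/ckey1 were registered earlier in the test setup (not shown in this part of the log).

```bash
# Hypothetical condensation of one auth-test iteration (host side).
# Assumes: the SPDK target is already listening on 10.0.0.1:4420 (as in the
# trace) and DH-CHAP keys "key1"/"ckey1" were registered beforehand.
RPC=./scripts/rpc.py   # stand-in for the test's rpc_cmd wrapper

# Restrict the initiator to a single digest / DH-group combination.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach using the per-key host secret and the bidirectional controller secret.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
```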
00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.100 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 nvme0n1 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.360 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.621 nvme0n1 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.621 01:01:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.621 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.881 nvme0n1 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.881 01:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.881 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.881 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.881 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.881 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.139 01:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.040 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.041 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.041 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.041 01:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 nvme0n1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 
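Between attach attempts the trace also checks that authentication actually produced a controller before moving on to the next key: it lists controllers, compares the reported name against nvme0, and detaches. A minimal equivalent of that check, again assuming rpc.py stands in for the rpc_cmd wrapper used by the test:

```bash
# Hypothetical verification/teardown step mirroring the trace's
# bdev_nvme_get_controllers / bdev_nvme_detach_controller sequence.
RPC=./scripts/rpc.py

name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
if [[ "$name" == "nvme0" ]]; then
    echo "DH-CHAP authentication succeeded for this digest/dhgroup/key"
    $RPC bdev_nvme_detach_controller nvme0
else
    echo "controller nvme0 not found - authentication failed" >&2
    exit 1
fi
```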
00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.869 nvme0n1 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.869 01:01:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.869 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.870 01:01:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.437 nvme0n1 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.437 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.438 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.005 nvme0n1 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.005 01:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.005 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.574 nvme0n1 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.574 01:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.511 nvme0n1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.511 01:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.442 nvme0n1 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:32.442 
01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.442 01:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.379 nvme0n1 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.379 
01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.379 01:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.322 nvme0n1 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.322 01:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.262 nvme0n1 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.262 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.263 nvme0n1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.263 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.524 nvme0n1 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:35.524 01:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.524 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.784 nvme0n1 00:34:35.784 01:01:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.784 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.785 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 nvme0n1 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.044 01:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 nvme0n1 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.044 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:36.303 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
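Key id 4 has no controller key in this run (the ckey echoed above is empty), so the attach for key4 is issued with --dhchap-key only. The ${ckeys[keyid]:+...} expansion that keeps appearing in the trace is what drops the flag when the slot is empty; a small self-contained illustration with placeholder values (not the secrets from this run):

    # Hypothetical stand-ins for the ckeys[] array built earlier in host/auth.sh.
    ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "ctrl-secret-3" "")

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: ${ckey[*]:-<no controller-key flag>}"
    done
    # keyid=1: --dhchap-ctrlr-key ckey1
    # keyid=4: <no controller-key flag>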
dhgroup=ffdhe3072 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.304 nvme0n1 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.304 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.563 
01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.563 01:01:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.563 nvme0n1 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.563 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.822 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.823 nvme0n1 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.823 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.081 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:37.081 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.081 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.081 01:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.081 nvme0n1 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.081 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.366 
01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.366 nvme0n1 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.366 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.366 
01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.627 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.886 nvme0n1 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.886 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.887 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.887 01:01:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.887 01:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.147 nvme0n1 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.147 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.148 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.407 nvme0n1 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.407 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.665 nvme0n1 00:34:38.665 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.665 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.665 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.665 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.665 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.923 01:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.923 01:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.180 nvme0n1 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
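
Note: the trace above is one full sha384/ffdhe4096 pass over the key indices. Condensed into the RPC calls it issues, each connect_authenticate iteration looks roughly like the sketch below; this is a reconstruction from the commands visible in the trace, not a verbatim copy of host/auth.sh. rpc_cmd is the SPDK autotest wrapper around scripts/rpc.py, 10.0.0.1 is the address get_main_ns_ip resolves for the TCP initiator, and key2/ckey2 are key names loaded earlier in the run.

    # select the DH-HMAC-CHAP digest and DH group for this iteration (host side)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # connect to the authenticating target with the host key and controller key
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the controller came up, then tear it down before the next key index
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0
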
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:39.180 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.181 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.754 nvme0n1 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.754 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.755 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.755 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.755 01:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.321 nvme0n1 00:34:40.321 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.322 01:01:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.322 01:01:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.322 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.886 nvme0n1 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.886 01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.886 
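
The key=/ckey= assignments and the bare echo lines inside each nvmet_auth_set_key call are the target-side half of the handshake: the DHHC-1 secrets, the 'hmac(sha384)' digest name and the DH group are programmed into the target's host entry so it expects the same credentials the initiator presents (xtrace does not display redirections, which is why only plain echoes appear). A minimal sketch of such a helper follows; it is hypothetical, and the configfs attribute paths are an assumption about the kernel nvmet auth interface rather than something shown in this log.

    # hypothetical reconstruction -- the real helper lives in spdk/test/nvmf/host/auth.sh;
    # keys/ckeys are the arrays the surrounding "for keyid" loop iterates over
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac(${digest})" > "${host}/dhchap_hash"        # e.g. hmac(sha384)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"     # e.g. ffdhe4096
        echo "${keys[keyid]}"  > "${host}/dhchap_key"         # DHHC-1:... host secret
        [[ -n ${ckeys[keyid]} ]] &&
            echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"   # only for bidirectional auth
    }
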
01:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.450 nvme0n1 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.450 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.708 nvme0n1 00:34:41.708 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.708 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.967 01:01:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.967 01:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.903 nvme0n1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.903 01:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.955 nvme0n1 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.955 
01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.955 01:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 nvme0n1 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.785 01:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.727 nvme0n1 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.727 01:02:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.727 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.728 01:02:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.728 01:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.668 nvme0n1 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.668 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:46.669 nvme0n1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.669 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.928 nvme0n1 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.928 01:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.928 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.928 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.928 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:46.928 
01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.929 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.188 nvme0n1 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.188 
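Each key index in this output is exercised with the same host-side RPC sequence: constrain the allowed digest and DH group, attach the controller with the DH-HMAC-CHAP host key (plus the controller key when one is configured), confirm that a controller named nvme0 actually appeared, then detach it before the next iteration. A minimal standalone sketch of that sequence, assuming SPDK's scripts/rpc.py is reachable as rpc.py and that the key1/ckey1 keyring entries were registered earlier in the test setup (addresses and NQNs are the ones used by this run):

  # restrict negotiation to a single digest / DH group pair (here sha512 + ffdhe2048)
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # attach, authenticating with host key 1 and, bidirectionally, controller key 1
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the attach succeeded, then tear the controller down for the next iteration
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expected output: nvme0
  rpc.py bdev_nvme_detach_controller nvme0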
01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.188 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.447 nvme0n1 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.447 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.708 nvme0n1 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.708 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.709 nvme0n1 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.709 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.968 
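The script@line markers (host/auth.sh@100-104) show the structure driving all of these iterations: every configured digest is crossed with every DH group and every key index, and each combination goes through nvmet_auth_set_key followed by connect_authenticate. A condensed sketch of that driver loop, assuming the keys[] and ckeys[] arrays were populated earlier in the script:

  for digest in "${digests[@]}"; do        # sha384 and sha512 appear in this excerpt
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 through ffdhe8192
          for keyid in "${!keys[@]}"; do   # key indices 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach/verify/detach on the host
          done
      done
  done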
01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.968 01:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.968 01:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.968 nvme0n1 00:34:47.968 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.968 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.968 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.968 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.968 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:48.228 01:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.228 nvme0n1 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.228 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.487 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.488 01:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.488 nvme0n1 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.488 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.747 
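Key index 4 is the one case in this excerpt with no controller secret configured (ckey is empty at host/auth.sh@46), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 contributes nothing and the attach is issued with --dhchap-key key4 only, i.e. without bidirectional controller authentication. A short sketch of that conditional argument construction, with rpc.py standing in for the test's rpc_cmd wrapper:

  # ckey expands to (--dhchap-ctrlr-key ckeyN) only when a controller secret exists for index N
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"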
01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:48.747 nvme0n1 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.747 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.748 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.748 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.008 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.009 01:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.009 01:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.269 nvme0n1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.269 01:02:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.269 01:02:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.269 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.530 nvme0n1 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.530 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.788 nvme0n1 00:34:49.788 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.788 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.788 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.788 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.788 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.048 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.049 01:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.049 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.309 nvme0n1 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.309 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.310 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.569 nvme0n1 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.569 01:02:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.569 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.570 01:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 nvme0n1 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.139 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.140 01:02:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.140 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.706 nvme0n1 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.706 01:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.276 nvme0n1 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.276 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.277 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.845 nvme0n1 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:52.845 01:02:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.845 01:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.415 nvme0n1 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.415 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDI2MDM0YTgyZmI3NDliNjZiOWZmZGEwNzlhNzY1ODOGuHD+: 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: ]] 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWFhYWNjYTRiMjI0MWVjZjljNjA1NWI3MWMwNTI3MTZkYmM4ZTVlMzQ1MTA2ZDNkNDU4MmFmNDJkZjFkNTgyYnQtprc=: 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.416 01:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.353 nvme0n1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.353 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.354 01:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.294 nvme0n1 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.294 01:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:55.294 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.295 01:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.295 01:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.229 nvme0n1 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.229 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmI2NDgzYmM5YmFmMzEzZDc1NWVmNGY4NzI2MmJiNzdjZDA3NzVkM2JlYTA4MDFmeLh7WQ==: 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: ]] 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGNiYTQxZDgwZmYyZDdhNjRlZThkOGMyMTNmNTg2MmKEjt3T: 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:56.230 01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.230 
01:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.172 nvme0n1 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODBlZGQxMWI4YzFmMTA0ZWMzZDJjZWQ1NzExNzE0Zjk1ZWNiNjFkODJjYmQyYzQ2NzNjOTc4OTFkNDM4M2RiNbUFBzU=: 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.173 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 nvme0n1 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 01:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:58.111 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.112 request: 00:34:58.112 { 00:34:58.112 "name": "nvme0", 00:34:58.112 "trtype": "tcp", 00:34:58.112 "traddr": "10.0.0.1", 00:34:58.112 "adrfam": "ipv4", 00:34:58.112 "trsvcid": "4420", 00:34:58.112 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:58.112 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:58.112 "prchk_reftag": false, 00:34:58.112 "prchk_guard": false, 00:34:58.112 "hdgst": false, 00:34:58.112 "ddgst": false, 00:34:58.112 "allow_unrecognized_csi": false, 00:34:58.112 "method": "bdev_nvme_attach_controller", 00:34:58.112 "req_id": 1 00:34:58.112 } 00:34:58.112 Got JSON-RPC error response 00:34:58.112 response: 00:34:58.112 { 00:34:58.112 "code": -5, 00:34:58.112 "message": "Input/output error" 00:34:58.112 } 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.112 request: 00:34:58.112 { 00:34:58.112 "name": "nvme0", 00:34:58.112 "trtype": "tcp", 00:34:58.112 "traddr": "10.0.0.1", 00:34:58.112 "adrfam": "ipv4", 00:34:58.112 "trsvcid": "4420", 00:34:58.112 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:58.112 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:58.112 "prchk_reftag": false, 00:34:58.112 "prchk_guard": false, 00:34:58.112 "hdgst": false, 00:34:58.112 "ddgst": false, 00:34:58.112 "dhchap_key": "key2", 00:34:58.112 "allow_unrecognized_csi": false, 00:34:58.112 "method": "bdev_nvme_attach_controller", 00:34:58.112 "req_id": 1 00:34:58.112 } 00:34:58.112 Got JSON-RPC error response 00:34:58.112 response: 00:34:58.112 { 00:34:58.112 "code": -5, 00:34:58.112 "message": "Input/output error" 00:34:58.112 } 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
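
[editor's note] The NOT+rpc_cmd calls above are the negative-authentication checks of host/auth.sh: the SPDK host attempts bdev_nvme_attach_controller against the kernel nvmet target while omitting the DH-HMAC-CHAP key, or (just below) offering a mismatched key/controller-key pair, and each attempt must fail with JSON-RPC error -5 ("Input/output error") while leaving no controller registered. A minimal standalone sketch of the same check follows, using scripts/rpc.py directly instead of the suite's rpc_cmd/NOT wrappers; the rpc.py path and default RPC socket, the 10.0.0.1:4420 listener, the host/subsystem NQNs, and the pre-registered key names (key1, key2, ckey2) are assumptions carried over from the log, and the helper name expect_attach_failure is hypothetical:

    expect_attach_failure() {
        # The attach must be rejected when the host offers no key or a
        # mismatched key; success here would mean authentication was skipped.
        if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 "$@"; then
            echo "unexpected success: attached without valid DH-HMAC-CHAP credentials" >&2
            return 1
        fi
        # As in host/auth.sh, confirm the failed attempt left no controller behind.
        [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
    }

    expect_attach_failure                                              # no key offered at all
    expect_attach_failure --dhchap-key key2                            # key alone, no matching pairing
    expect_attach_failure --dhchap-key key1 --dhchap-ctrlr-key ckey2   # mismatched controller key

The three invocations mirror the no-key attempt above, the key2-only attempt above, and the key1/ckey2 attempt that follows in the log; only the wrapper around them is simplified. [end editor's note]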
00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.112 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 request: 00:34:58.374 { 00:34:58.374 "name": "nvme0", 00:34:58.374 "trtype": "tcp", 00:34:58.374 "traddr": "10.0.0.1", 00:34:58.374 "adrfam": "ipv4", 00:34:58.374 "trsvcid": "4420", 00:34:58.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:58.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:58.374 "prchk_reftag": false, 00:34:58.374 "prchk_guard": false, 00:34:58.374 "hdgst": false, 00:34:58.374 "ddgst": false, 00:34:58.374 "dhchap_key": "key1", 00:34:58.374 "dhchap_ctrlr_key": "ckey2", 00:34:58.374 "allow_unrecognized_csi": false, 00:34:58.374 "method": "bdev_nvme_attach_controller", 00:34:58.374 "req_id": 1 00:34:58.374 } 00:34:58.374 Got JSON-RPC error response 00:34:58.374 response: 00:34:58.374 { 00:34:58.374 "code": -5, 00:34:58.374 "message": "Input/output 
error" 00:34:58.374 } 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 nvme0n1 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.374 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.635 request: 00:34:58.635 { 00:34:58.635 "name": "nvme0", 00:34:58.635 "dhchap_key": "key1", 00:34:58.635 "dhchap_ctrlr_key": "ckey2", 00:34:58.635 "method": "bdev_nvme_set_keys", 00:34:58.635 "req_id": 1 00:34:58.635 } 00:34:58.635 Got JSON-RPC error response 00:34:58.635 response: 00:34:58.635 { 00:34:58.635 "code": -13, 00:34:58.635 "message": "Permission denied" 00:34:58.635 } 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:58.635 01:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:59.575 01:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTE4ODNiYmM5MzY4MWI1YTViYzIzMzM0NjhjYjc5MjY3NzAyMDA2ZjVkZTBjZmUxzoNGEA==: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjcwOWY5ZjE5OWJiMDg4ZmQyYzFjZmVjMDMyNWViYjYyZTIyM2M4YTU2OGY0OGU21D0TAA==: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.951 nvme0n1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODBjMzJmMWI4ZGIxMGVlNjEzZmExZjRkMTU2NWM4OTRSiyi2: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: ]] 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2JhNDI2MDE5MjhhZTFmZDg4ZGQ5ODBlNDFmM2FlMjYUdaNg: 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.951 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.951 request: 00:35:00.951 { 00:35:00.951 "name": "nvme0", 00:35:00.951 "dhchap_key": "key2", 00:35:00.951 "dhchap_ctrlr_key": "ckey1", 00:35:00.951 "method": "bdev_nvme_set_keys", 00:35:00.951 "req_id": 1 00:35:00.951 } 00:35:00.951 Got JSON-RPC error response 00:35:00.951 response: 00:35:00.951 { 00:35:00.952 "code": -13, 00:35:00.952 "message": "Permission denied" 00:35:00.952 } 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.952 01:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.952 01:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:00.952 01:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:01.891 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.891 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:01.891 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.891 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.891 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:02.152 01:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.152 rmmod nvme_tcp 00:35:02.152 rmmod nvme_fabrics 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 394445 ']' 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 394445 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 394445 ']' 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 394445 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394445 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394445' 00:35:02.152 killing process with pid 394445 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 394445 00:35:02.152 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 394445 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:02.412 01:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:04.326 01:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:05.708 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:05.708 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:05.708 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:06.651 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:06.651 01:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yio /tmp/spdk.key-null.NfO /tmp/spdk.key-sha256.INp /tmp/spdk.key-sha384.eig /tmp/spdk.key-sha512.n2g /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:06.651 01:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:08.036 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:08.036 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:08.036 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:35:08.037 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:08.037 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:08.037 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:08.037 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:08.037 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:08.037 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:08.037 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:08.037 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:08.037 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:08.037 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:08.037 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:08.037 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:08.037 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:08.037 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:08.037 00:35:08.037 real 0m53.632s 00:35:08.037 user 0m51.321s 00:35:08.037 sys 0m6.069s 00:35:08.037 01:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.037 01:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.037 ************************************ 00:35:08.037 END TEST nvmf_auth_host 00:35:08.037 ************************************ 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.037 ************************************ 00:35:08.037 START TEST nvmf_digest 00:35:08.037 ************************************ 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:08.037 * Looking for test storage... 
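The tail of the nvmf_auth_host trace above is a negative check on DH-HMAC-CHAP key rotation: bdev_nvme_set_keys is called with a mismatched controller key (key2/ckey1) and the step only passes because the target answers with JSON-RPC error -13, "Permission denied". A minimal standalone sketch of that check, assuming an SPDK application on the default /var/tmp/spdk.sock with a controller named nvme0 attached and the key2/ckey1 keys already registered in its keyring:

#!/usr/bin/env bash
# Sketch of the negative key-rotation check seen above (not the harness itself).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

if out=$("$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 2>&1); then
    echo "unexpected success: rotating to a mismatched ctrlr key should be rejected"
    exit 1
fi
# The trace above expects code -13 with the message "Permission denied".
echo "$out" | grep -q 'Permission denied' || { echo "unexpected error: $out"; exit 1; }
echo "got the expected Permission denied response"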
00:35:08.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.037 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.296 --rc genhtml_branch_coverage=1 00:35:08.296 --rc genhtml_function_coverage=1 00:35:08.296 --rc genhtml_legend=1 00:35:08.296 --rc geninfo_all_blocks=1 00:35:08.296 --rc geninfo_unexecuted_blocks=1 00:35:08.296 00:35:08.296 ' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.296 --rc genhtml_branch_coverage=1 00:35:08.296 --rc genhtml_function_coverage=1 00:35:08.296 --rc genhtml_legend=1 00:35:08.296 --rc geninfo_all_blocks=1 00:35:08.296 --rc geninfo_unexecuted_blocks=1 00:35:08.296 00:35:08.296 ' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.296 --rc genhtml_branch_coverage=1 00:35:08.296 --rc genhtml_function_coverage=1 00:35:08.296 --rc genhtml_legend=1 00:35:08.296 --rc geninfo_all_blocks=1 00:35:08.296 --rc geninfo_unexecuted_blocks=1 00:35:08.296 00:35:08.296 ' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.296 --rc genhtml_branch_coverage=1 00:35:08.296 --rc genhtml_function_coverage=1 00:35:08.296 --rc genhtml_legend=1 00:35:08.296 --rc geninfo_all_blocks=1 00:35:08.296 --rc geninfo_unexecuted_blocks=1 00:35:08.296 00:35:08.296 ' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.296 
01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:08.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.296 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.297 01:02:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.297 01:02:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:10.833 
01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:10.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:10.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:10.833 Found net devices under 0000:0a:00.0: cvl_0_0 
00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:10.833 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:10.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:10.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:10.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:35:10.834 00:35:10.834 --- 10.0.0.2 ping statistics --- 00:35:10.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.834 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:10.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:10.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:35:10.834 00:35:10.834 --- 10.0.0.1 ping statistics --- 00:35:10.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:10.834 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.834 ************************************ 00:35:10.834 START TEST nvmf_digest_clean 00:35:10.834 ************************************ 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=404446 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 404446 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 404446 ']' 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.834 [2024-12-07 01:02:26.694028] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:10.834 [2024-12-07 01:02:26.694116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.834 [2024-12-07 01:02:26.765967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.834 [2024-12-07 01:02:26.811031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.834 [2024-12-07 01:02:26.811087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.834 [2024-12-07 01:02:26.811115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.834 [2024-12-07 01:02:26.811127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.834 [2024-12-07 01:02:26.811136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
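Before the digest tests start, nvmf_tcp_init above rebuilds the usual two-port TCP test bed: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, TCP port 4420 is opened in iptables with an SPDK_NVMF-tagged rule, and both directions are sanity-pinged before nvmf_tgt is launched inside the namespace with --wait-for-rpc. A condensed sketch of that wiring (root required; the interface names and the 10.0.0.0/24 addressing are specific to this run):

# Target-side port goes into its own namespace; the initiator side stays in the root namespace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and confirm reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target is then started inside the namespace, roughly:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc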
00:35:10.834 [2024-12-07 01:02:26.811693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.834 01:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.095 null0 00:35:11.095 [2024-12-07 01:02:27.056155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.095 [2024-12-07 01:02:27.080387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=404471 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 404471 /var/tmp/bperf.sock 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 404471 ']' 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.095 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.095 [2024-12-07 01:02:27.131780] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:11.095 [2024-12-07 01:02:27.131854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404471 ] 00:35:11.095 [2024-12-07 01:02:27.199674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.354 [2024-12-07 01:02:27.248189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.354 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.354 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:11.354 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:11.354 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:11.354 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:11.612 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.612 01:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.182 nvme0n1 00:35:12.182 01:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:12.182 01:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.441 Running I/O for 2 seconds... 
00:35:14.317 18776.00 IOPS, 73.34 MiB/s [2024-12-07T00:02:30.468Z] 18553.50 IOPS, 72.47 MiB/s 00:35:14.317 Latency(us) 00:35:14.317 [2024-12-07T00:02:30.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.317 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:14.317 nvme0n1 : 2.05 18193.47 71.07 0.00 0.00 6890.49 3422.44 46797.56 00:35:14.317 [2024-12-07T00:02:30.468Z] =================================================================================================================== 00:35:14.317 [2024-12-07T00:02:30.468Z] Total : 18193.47 71.07 0.00 0.00 6890.49 3422.44 46797.56 00:35:14.317 { 00:35:14.317 "results": [ 00:35:14.317 { 00:35:14.317 "job": "nvme0n1", 00:35:14.317 "core_mask": "0x2", 00:35:14.317 "workload": "randread", 00:35:14.317 "status": "finished", 00:35:14.317 "queue_depth": 128, 00:35:14.317 "io_size": 4096, 00:35:14.317 "runtime": 2.046613, 00:35:14.317 "iops": 18193.4738028147, 00:35:14.317 "mibps": 71.06825704224492, 00:35:14.317 "io_failed": 0, 00:35:14.317 "io_timeout": 0, 00:35:14.317 "avg_latency_us": 6890.489048356533, 00:35:14.317 "min_latency_us": 3422.4355555555558, 00:35:14.317 "max_latency_us": 46797.55851851852 00:35:14.317 } 00:35:14.317 ], 00:35:14.317 "core_count": 1 00:35:14.317 } 00:35:14.317 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:14.317 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:14.317 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:14.317 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:14.317 | select(.opcode=="crc32c") 00:35:14.317 | "\(.module_name) \(.executed)"' 00:35:14.317 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:14.884 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 404471 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 404471 ']' 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 404471 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404471 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404471' 00:35:14.885 killing process with pid 404471 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 404471 00:35:14.885 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.885 00:35:14.885 Latency(us) 00:35:14.885 [2024-12-07T00:02:31.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.885 [2024-12-07T00:02:31.036Z] =================================================================================================================== 00:35:14.885 [2024-12-07T00:02:31.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 404471 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=405003 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 405003 /var/tmp/bperf.sock 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 405003 ']' 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.885 01:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:14.885 [2024-12-07 01:02:30.993952] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
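Each run_bperf pass in nvmf_digest_clean follows the same pattern traced above: bdevperf is started with --wait-for-rpc on /var/tmp/bperf.sock, initialization is completed over RPC, the remote namespace is attached with the TCP data digest enabled (--ddgst), I/O runs for two seconds, and accel_get_stats is filtered with jq to confirm which accel module executed the crc32c digest operations (the software module here, since DSA is disabled for these passes). A condensed sketch of one pass, assuming the target from the trace above is still listening on 10.0.0.2:4420:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (--wait-for-rpc) and wait for its RPC socket to appear.
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
BPERF=$!
while [ ! -S "$SOCK" ]; do sleep 0.2; done

# Finish init, attach the namespace with the data digest on, and drive the 2-second workload.
"$SPDK"/scripts/rpc.py -s "$SOCK" framework_start_init
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

# Check that crc32c digests were computed and report the accel module that ran them.
"$SPDK"/scripts/rpc.py -s "$SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

kill "$BPERF"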
00:35:14.885 [2024-12-07 01:02:30.994055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405003 ] 00:35:14.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.885 Zero copy mechanism will not be used. 00:35:15.143 [2024-12-07 01:02:31.060261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.143 [2024-12-07 01:02:31.105055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.143 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.143 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:15.143 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:15.143 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:15.143 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.709 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.709 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.969 nvme0n1 00:35:15.969 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:15.969 01:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.969 Zero copy mechanism will not be used. 00:35:15.969 Running I/O for 2 seconds... 
00:35:18.287 5743.00 IOPS, 717.88 MiB/s [2024-12-07T00:02:34.438Z] 6006.00 IOPS, 750.75 MiB/s 00:35:18.287 Latency(us) 00:35:18.287 [2024-12-07T00:02:34.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.287 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:18.287 nvme0n1 : 2.00 6005.21 750.65 0.00 0.00 2660.12 682.67 8835.22 00:35:18.287 [2024-12-07T00:02:34.438Z] =================================================================================================================== 00:35:18.287 [2024-12-07T00:02:34.438Z] Total : 6005.21 750.65 0.00 0.00 2660.12 682.67 8835.22 00:35:18.287 { 00:35:18.287 "results": [ 00:35:18.287 { 00:35:18.287 "job": "nvme0n1", 00:35:18.287 "core_mask": "0x2", 00:35:18.287 "workload": "randread", 00:35:18.287 "status": "finished", 00:35:18.287 "queue_depth": 16, 00:35:18.287 "io_size": 131072, 00:35:18.287 "runtime": 2.002927, 00:35:18.287 "iops": 6005.211373155387, 00:35:18.287 "mibps": 750.6514216444234, 00:35:18.287 "io_failed": 0, 00:35:18.287 "io_timeout": 0, 00:35:18.287 "avg_latency_us": 2660.1224264370785, 00:35:18.287 "min_latency_us": 682.6666666666666, 00:35:18.287 "max_latency_us": 8835.223703703703 00:35:18.287 } 00:35:18.287 ], 00:35:18.287 "core_count": 1 00:35:18.287 } 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:18.287 | select(.opcode=="crc32c") 00:35:18.287 | "\(.module_name) \(.executed)"' 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 405003 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 405003 ']' 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 405003 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405003 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405003' 00:35:18.287 killing process with pid 405003 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 405003 00:35:18.287 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.287 00:35:18.287 Latency(us) 00:35:18.287 [2024-12-07T00:02:34.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.287 [2024-12-07T00:02:34.438Z] =================================================================================================================== 00:35:18.287 [2024-12-07T00:02:34.438Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.287 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 405003 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=405403 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 405403 /var/tmp/bperf.sock 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 405403 ']' 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.546 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.546 [2024-12-07 01:02:34.609806] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:35:18.546 [2024-12-07 01:02:34.609902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405403 ] 00:35:18.546 [2024-12-07 01:02:34.676942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.803 [2024-12-07 01:02:34.720910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.803 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.803 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:18.803 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:18.803 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:18.803 01:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:19.060 01:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.060 01:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.626 nvme0n1 00:35:19.626 01:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:19.626 01:02:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.626 Running I/O for 2 seconds... 
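For reference, the run_bperf invocation traced above reduces to four commands. A minimal standalone sketch (paths relative to an SPDK checkout; the bperf RPC socket, target address 10.0.0.2:4420 and subsystem NQN are the ones used by this job):

    # start bdevperf idle: -z keeps it running, --wait-for-rpc defers framework init
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # finish framework init, then attach the NVMe/TCP controller with data digest enabled
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # run the timed workload against the nvme0n1 bdev created above
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests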
00:35:21.951 19323.00 IOPS, 75.48 MiB/s [2024-12-07T00:02:38.102Z] 18941.50 IOPS, 73.99 MiB/s 00:35:21.951 Latency(us) 00:35:21.951 [2024-12-07T00:02:38.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.951 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.951 nvme0n1 : 2.01 18943.83 74.00 0.00 0.00 6741.62 2767.08 9077.95 00:35:21.951 [2024-12-07T00:02:38.102Z] =================================================================================================================== 00:35:21.951 [2024-12-07T00:02:38.102Z] Total : 18943.83 74.00 0.00 0.00 6741.62 2767.08 9077.95 00:35:21.951 { 00:35:21.951 "results": [ 00:35:21.951 { 00:35:21.951 "job": "nvme0n1", 00:35:21.951 "core_mask": "0x2", 00:35:21.951 "workload": "randwrite", 00:35:21.951 "status": "finished", 00:35:21.951 "queue_depth": 128, 00:35:21.951 "io_size": 4096, 00:35:21.951 "runtime": 2.006511, 00:35:21.951 "iops": 18943.828366752037, 00:35:21.951 "mibps": 73.99932955762515, 00:35:21.951 "io_failed": 0, 00:35:21.951 "io_timeout": 0, 00:35:21.951 "avg_latency_us": 6741.62015993421, 00:35:21.951 "min_latency_us": 2767.0755555555556, 00:35:21.951 "max_latency_us": 9077.94962962963 00:35:21.951 } 00:35:21.951 ], 00:35:21.951 "core_count": 1 00:35:21.951 } 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:21.951 | select(.opcode=="crc32c") 00:35:21.951 | "\(.module_name) \(.executed)"' 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 405403 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 405403 ']' 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 405403 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.951 01:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405403 00:35:21.951 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.951 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:21.951 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405403' 00:35:21.951 killing process with pid 405403 00:35:21.951 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 405403 00:35:21.951 Received shutdown signal, test time was about 2.000000 seconds 00:35:21.951 00:35:21.952 Latency(us) 00:35:21.952 [2024-12-07T00:02:38.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.952 [2024-12-07T00:02:38.103Z] =================================================================================================================== 00:35:21.952 [2024-12-07T00:02:38.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.952 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 405403 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=405812 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 405812 /var/tmp/bperf.sock 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 405812 ']' 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.211 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.212 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.212 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:22.212 [2024-12-07 01:02:38.262892] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:35:22.212 [2024-12-07 01:02:38.262985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405812 ] 00:35:22.212 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.212 Zero copy mechanism will not be used. 00:35:22.212 [2024-12-07 01:02:38.330093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.470 [2024-12-07 01:02:38.378553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.470 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.470 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:22.470 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:22.470 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:22.470 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:23.041 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.041 01:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.300 nvme0n1 00:35:23.300 01:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:23.300 01:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:23.560 Zero copy mechanism will not be used. 00:35:23.560 Running I/O for 2 seconds... 
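After each 2-second run the harness checks which accel module actually executed the CRC32C work; with scan_dsa=false the expected module is software. Condensed from the get_accel_stats helper and jq filter traced above (the process-substitution form here is a sketch, not the literal script):

    exp_module=software   # scan_dsa=false in this job
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))              # some digests must have been computed
    [[ $acc_module == "$exp_module" ]]  # ...and by the expected module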
00:35:25.438 5729.00 IOPS, 716.12 MiB/s [2024-12-07T00:02:41.589Z] 5913.00 IOPS, 739.12 MiB/s 00:35:25.438 Latency(us) 00:35:25.438 [2024-12-07T00:02:41.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.438 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:25.438 nvme0n1 : 2.00 5909.68 738.71 0.00 0.00 2699.85 1941.81 8689.59 00:35:25.438 [2024-12-07T00:02:41.589Z] =================================================================================================================== 00:35:25.438 [2024-12-07T00:02:41.589Z] Total : 5909.68 738.71 0.00 0.00 2699.85 1941.81 8689.59 00:35:25.438 { 00:35:25.438 "results": [ 00:35:25.438 { 00:35:25.438 "job": "nvme0n1", 00:35:25.438 "core_mask": "0x2", 00:35:25.438 "workload": "randwrite", 00:35:25.438 "status": "finished", 00:35:25.438 "queue_depth": 16, 00:35:25.438 "io_size": 131072, 00:35:25.438 "runtime": 2.003662, 00:35:25.438 "iops": 5909.679377060602, 00:35:25.438 "mibps": 738.7099221325752, 00:35:25.438 "io_failed": 0, 00:35:25.438 "io_timeout": 0, 00:35:25.438 "avg_latency_us": 2699.8467671962144, 00:35:25.438 "min_latency_us": 1941.8074074074075, 00:35:25.438 "max_latency_us": 8689.588148148148 00:35:25.438 } 00:35:25.438 ], 00:35:25.438 "core_count": 1 00:35:25.438 } 00:35:25.438 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:25.438 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:25.438 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:25.438 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:25.438 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:25.438 | select(.opcode=="crc32c") 00:35:25.438 | "\(.module_name) \(.executed)"' 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 405812 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 405812 ']' 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 405812 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405812 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 405812' 00:35:25.698 killing process with pid 405812 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 405812 00:35:25.698 Received shutdown signal, test time was about 2.000000 seconds 00:35:25.698 00:35:25.698 Latency(us) 00:35:25.698 [2024-12-07T00:02:41.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.698 [2024-12-07T00:02:41.849Z] =================================================================================================================== 00:35:25.698 [2024-12-07T00:02:41.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.698 01:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 405812 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 404446 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 404446 ']' 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 404446 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404446 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404446' 00:35:25.958 killing process with pid 404446 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 404446 00:35:25.958 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 404446 00:35:26.218 00:35:26.218 real 0m15.626s 00:35:26.218 user 0m31.373s 00:35:26.218 sys 0m4.283s 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:26.218 ************************************ 00:35:26.218 END TEST nvmf_digest_clean 00:35:26.218 ************************************ 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:26.218 ************************************ 00:35:26.218 START TEST nvmf_digest_error 00:35:26.218 ************************************ 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=406367 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 406367 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 406367 ']' 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.218 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.478 [2024-12-07 01:02:42.368593] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:26.478 [2024-12-07 01:02:42.368700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.478 [2024-12-07 01:02:42.440517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.478 [2024-12-07 01:02:42.484332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.478 [2024-12-07 01:02:42.484389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.478 [2024-12-07 01:02:42.484412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.478 [2024-12-07 01:02:42.484423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.478 [2024-12-07 01:02:42.484433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
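The nvmf_digest_error test starting here exercises the failure path: crc32c on the target is routed through the accel "error" module and corruption is injected once the initiator is attached, while the bdevperf side keeps NVMe error statistics and retries indefinitely. A condensed sketch of the arming sequence traced below (accel RPCs go to the nvmf target's default RPC socket, bdev RPCs to /var/tmp/bperf.sock; paths relative to an SPDK checkout):

    # target: route crc32c through the "error" accel module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # initiator: per-controller NVMe error counters, retry failed I/O indefinitely
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # keep injection disabled while attaching so the connect itself succeeds
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # then arm crc32c corruption and run I/O; the data digest errors and
    # TRANSIENT TRANSPORT ERROR completions logged below are the expected result
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests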
00:35:26.478 [2024-12-07 01:02:42.485018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.478 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.478 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:26.478 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.478 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.478 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.479 [2024-12-07 01:02:42.617747] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.479 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.738 null0 00:35:26.738 [2024-12-07 01:02:42.738674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.738 [2024-12-07 01:02:42.762903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=406388 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 406388 /var/tmp/bperf.sock 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 406388 ']' 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:26.738 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.739 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:26.739 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:26.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:26.739 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.739 01:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:26.739 [2024-12-07 01:02:42.815080] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:26.739 [2024-12-07 01:02:42.815167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406388 ] 00:35:26.739 [2024-12-07 01:02:42.882167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.997 [2024-12-07 01:02:42.929193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.997 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.997 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:26.997 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:26.997 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.256 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:27.517 nvme0n1 00:35:27.517 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:27.517 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.517 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set 
+x 00:35:27.778 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.778 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:27.778 01:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:27.778 Running I/O for 2 seconds... 00:35:27.778 [2024-12-07 01:02:43.820772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.820824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.820861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.834482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.834515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.834532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.847863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.847897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.847929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.863651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.863701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.874692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.874723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.874741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.891869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.891903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.891922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.906672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.906720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.906738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.778 [2024-12-07 01:02:43.921243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:27.778 [2024-12-07 01:02:43.921279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.778 [2024-12-07 01:02:43.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:43.933193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:43.933226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:43.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:43.949373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:43.949404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:43.949422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:43.963848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:43.963879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:43.963896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:43.977647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:43.977680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:43.977699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:43.988850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:43.988897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:43.988914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.004955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.005013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.005034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.019748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.019784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.019814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.031653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.031702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.031727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.046143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.046175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.046193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.057793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.057825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.057843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.071782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.071813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.071830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.085569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.085616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.039 [2024-12-07 01:02:44.085633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.039 [2024-12-07 01:02:44.096909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.039 [2024-12-07 01:02:44.096941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.096958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.112780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.112811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.112828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.127959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.128015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.128035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.144184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.144232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.144251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.155635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.155671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.155688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.171660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.171692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.171725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.040 [2024-12-07 01:02:44.187495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.040 [2024-12-07 01:02:44.187527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:28.040 [2024-12-07 01:02:44.187544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.201102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.201136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.201155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.213761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.213808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.213825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.229616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.229662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.229680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.245880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.245912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.245930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.256233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.256266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.256300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.271569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.271601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.271619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.286467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.286498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:99 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.286515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.302614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.302645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.302663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.319700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.319732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.319765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.336824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.336856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.336874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.351973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.352016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.352036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.363646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.363677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.363694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.380941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.380976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.381005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.397488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.397520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.397537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.413846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.413877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.413905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.430289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.430335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.430352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.300 [2024-12-07 01:02:44.444992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.300 [2024-12-07 01:02:44.445041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.300 [2024-12-07 01:02:44.445068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.561 [2024-12-07 01:02:44.456899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.561 [2024-12-07 01:02:44.456934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.561 [2024-12-07 01:02:44.456969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.561 [2024-12-07 01:02:44.472366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.561 [2024-12-07 01:02:44.472398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.561 [2024-12-07 01:02:44.472416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.561 [2024-12-07 01:02:44.483800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.561 [2024-12-07 01:02:44.483832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.483850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.499517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 
00:35:28.562 [2024-12-07 01:02:44.499554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.499572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.515805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.515837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.515855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.530402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.530435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.530454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.548026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.548065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.548083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.559008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.559041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.559060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.573521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.573554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.573572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.588917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.588949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.588982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.600824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.600858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.600877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.614566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.614599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.614616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.627384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.627417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.627436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.640294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.640341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.652769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.652802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.652826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.665623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.665655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.678346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.678377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.678393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.690965] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.691005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.691041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.562 [2024-12-07 01:02:44.703594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.562 [2024-12-07 01:02:44.703626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.562 [2024-12-07 01:02:44.703645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.718463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.718497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.718516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.733446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.733480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.733500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.745514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.745548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.745580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.761120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.761154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.761173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.775825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.775863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.775881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.787968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.788024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.788044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 17758.00 IOPS, 69.37 MiB/s [2024-12-07T00:02:44.975Z] [2024-12-07 01:02:44.802817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.802864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.802882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.817032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.817065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.817084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.829267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.829315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.829334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.841818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.841865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.824 [2024-12-07 01:02:44.841883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.824 [2024-12-07 01:02:44.855671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.824 [2024-12-07 01:02:44.855703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.855721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.867017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.867050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.867068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.882662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.882695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.882729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.897522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.897574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.909350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.909381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.909398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.924397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.924433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.924451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.941067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.941116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.941135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.954620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.954652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.825 [2024-12-07 01:02:44.954671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.825 [2024-12-07 01:02:44.967544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:28.825 [2024-12-07 01:02:44.967578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:28.825 [2024-12-07 01:02:44.967596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:44.980102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:44.980137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:44.980155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:44.991578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:44.991609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:44.991626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.005856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.005887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.005911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.022672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.022706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.022723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.035404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.035442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.035462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.049766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.049799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.049817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.062723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.062763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:5938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.062782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.074327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.074358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.074375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.088346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.088393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.088410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.100628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.100663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.100682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.112628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.112660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.112677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.127328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.127361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.127379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.142642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.142674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.142692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.153836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.096 [2024-12-07 01:02:45.153867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.096 [2024-12-07 01:02:45.153901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.096 [2024-12-07 01:02:45.166212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.166245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.166269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.097 [2024-12-07 01:02:45.180305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.180337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.180355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.097 [2024-12-07 01:02:45.191656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.191690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.191708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.097 [2024-12-07 01:02:45.204467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.204501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.204519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.097 [2024-12-07 01:02:45.219607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.219639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.219656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.097 [2024-12-07 01:02:45.234812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.097 [2024-12-07 01:02:45.234845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.097 [2024-12-07 01:02:45.234869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.246444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 
00:35:29.355 [2024-12-07 01:02:45.246479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.246512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.261225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.261256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.261288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.277280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.277326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.277342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.292425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.292457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.292475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.303499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.303529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.303546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.319428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.319458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.319475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.335977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.336029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.336049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.351761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.351793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.355 [2024-12-07 01:02:45.351810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.355 [2024-12-07 01:02:45.362718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.355 [2024-12-07 01:02:45.362755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.362772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.376821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.376852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.376869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.392491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.392539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.392558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.404735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.404765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.404782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.418506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.418541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.418560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.433810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.433843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.433861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.448719] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.448752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.448769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.460939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.460970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.461010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.476739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.476798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.476816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.492013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.492049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.356 [2024-12-07 01:02:45.503035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.356 [2024-12-07 01:02:45.503067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.356 [2024-12-07 01:02:45.503084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.517484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.517515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.517532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.530597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.530630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.530648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:29.617 [2024-12-07 01:02:45.546947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.546977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.547001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.560856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.560889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.560907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.572064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.572095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.572111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.584722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.584752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.584769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.597511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.597541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.597563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.613530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.613560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.613577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.627936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.627968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.627985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.642402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.642434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.642452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.654252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.654286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.668248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.668281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.668313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.682912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.682945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.682963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.694689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.694735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.694752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.710284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.710332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.710350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.726297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.726364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.743013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.743046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.743063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.617 [2024-12-07 01:02:45.753660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.617 [2024-12-07 01:02:45.753704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.617 [2024-12-07 01:02:45.753722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.876 [2024-12-07 01:02:45.769538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.876 [2024-12-07 01:02:45.769569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.876 [2024-12-07 01:02:45.769586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.876 [2024-12-07 01:02:45.785593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.876 [2024-12-07 01:02:45.785624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.876 [2024-12-07 01:02:45.785640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.876 18073.00 IOPS, 70.60 MiB/s [2024-12-07T00:02:46.027Z] [2024-12-07 01:02:45.803053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a38a90) 00:35:29.876 [2024-12-07 01:02:45.803086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:29.876 [2024-12-07 01:02:45.803104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.876 00:35:29.876 Latency(us) 00:35:29.876 [2024-12-07T00:02:46.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.876 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:29.876 nvme0n1 : 2.01 18101.02 70.71 0.00 0.00 7064.50 3106.89 24272.59 00:35:29.876 [2024-12-07T00:02:46.027Z] =================================================================================================================== 00:35:29.876 [2024-12-07T00:02:46.027Z] Total : 18101.02 70.71 0.00 0.00 7064.50 3106.89 24272.59 00:35:29.876 { 00:35:29.876 "results": [ 00:35:29.876 { 00:35:29.876 "job": "nvme0n1", 00:35:29.876 "core_mask": "0x2", 00:35:29.876 "workload": "randread", 00:35:29.877 "status": "finished", 00:35:29.877 "queue_depth": 128, 
00:35:29.877 "io_size": 4096, 00:35:29.877 "runtime": 2.005136, 00:35:29.877 "iops": 18101.016589398423, 00:35:29.877 "mibps": 70.70709605233759, 00:35:29.877 "io_failed": 0, 00:35:29.877 "io_timeout": 0, 00:35:29.877 "avg_latency_us": 7064.504134576236, 00:35:29.877 "min_latency_us": 3106.8918518518517, 00:35:29.877 "max_latency_us": 24272.59259259259 00:35:29.877 } 00:35:29.877 ], 00:35:29.877 "core_count": 1 00:35:29.877 } 00:35:29.877 01:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:29.877 01:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:29.877 01:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:29.877 | .driver_specific 00:35:29.877 | .nvme_error 00:35:29.877 | .status_code 00:35:29.877 | .command_transient_transport_error' 00:35:29.877 01:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 406388 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 406388 ']' 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 406388 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406388 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406388' 00:35:30.135 killing process with pid 406388 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 406388 00:35:30.135 Received shutdown signal, test time was about 2.000000 seconds 00:35:30.135 00:35:30.135 Latency(us) 00:35:30.135 [2024-12-07T00:02:46.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.135 [2024-12-07T00:02:46.286Z] =================================================================================================================== 00:35:30.135 [2024-12-07T00:02:46.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.135 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 406388 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:30.394 
01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=406799 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 406799 /var/tmp/bperf.sock 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 406799 ']' 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:30.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.394 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.394 [2024-12-07 01:02:46.350308] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:30.394 [2024-12-07 01:02:46.350403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406799 ] 00:35:30.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.394 Zero copy mechanism will not be used. 
00:35:30.394 [2024-12-07 01:02:46.418124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.394 [2024-12-07 01:02:46.465267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.653 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.653 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:30.653 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:30.653 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.911 01:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.170 nvme0n1 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:31.170 01:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:31.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:31.433 Zero copy mechanism will not be used. 00:35:31.433 Running I/O for 2 seconds... 
00:35:31.433 [2024-12-07 01:02:47.337100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.337159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.337195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.342462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.342500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.342547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.346901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.346933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.346951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.349837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.349869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.349887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.354089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.354121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.354139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.357687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.357719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.357737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.362099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.362132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.362150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.367138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.367171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.367190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.372363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.372412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.372430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.377758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.377805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.377823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.382482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.382535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.382553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.387350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.387396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.387414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.392078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.392125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.392143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.397034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.397066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.397083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.401714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.401766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.401784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.407448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.407494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.407511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.412060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.412091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.412108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.416656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.416687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.416719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.422182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.422214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.422231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.429178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.429210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.429228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.436225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.436272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.436291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.442288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.442320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.442338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.447486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.447518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.447537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.451086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.451120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.451138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.456666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.456711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.433 [2024-12-07 01:02:47.456729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.433 [2024-12-07 01:02:47.461716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.433 [2024-12-07 01:02:47.461749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.461768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.466689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.466721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.466739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.471558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.471591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:31.434 [2024-12-07 01:02:47.471630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.476157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.476190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.476208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.479644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.479691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.485387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.485418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.491060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.491093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.491111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.497199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.497231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.497249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.502478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.502526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.502544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.507460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.507504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.507522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.512151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.512182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.512200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.516753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.516790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.516809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.520074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.520105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.520122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.523831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.523861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.523878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.528371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.528401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.528432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.533140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.533171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.533188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.537717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.537748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.537765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.542359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.542405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.542421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.547067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.547098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.547115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.551656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.551687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.551719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.556462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.556508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.556525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.561296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.561326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.561343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.566681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.566714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.566732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.571438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 
[2024-12-07 01:02:47.571485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.571503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.434 [2024-12-07 01:02:47.576160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.434 [2024-12-07 01:02:47.576191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.434 [2024-12-07 01:02:47.576208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.695 [2024-12-07 01:02:47.580817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.580862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.580879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.585315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.585361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.585377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.589964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.590036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.594456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.594488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.594511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.598847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.598878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.598895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.603448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.603508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.608222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.608253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.608270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.612732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.612776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.612793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.617324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.617368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.617385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.622489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.622520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.622536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.627660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.627691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.627709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.632304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.632334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.632351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.637419] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.637466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.637484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.643474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.643520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.643537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.650562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.650596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.650613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.658687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.658738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.658756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.666448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.666480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.666499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.673317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.673348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.673379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.678488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.678521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.678538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:31.696 [2024-12-07 01:02:47.681967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.682007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.682028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.687302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.687335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.687374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.693412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.693442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.693472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.699262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.699295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.699327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.705398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.705429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.705461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.711320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.711353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.711386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.696 [2024-12-07 01:02:47.717080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.696 [2024-12-07 01:02:47.717112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.696 [2024-12-07 01:02:47.717130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.723022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.723086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.728641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.728672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.728704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.734870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.734915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.734932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.740869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.740920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.740938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.747036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.747071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.747103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.753077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.753109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.753126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.759538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.759568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.759600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.765822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.765869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.765886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.771482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.771528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.771546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.777366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.777396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.777413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.783110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.783176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.788962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.789003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.789025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.793588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.793619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.793654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.798277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.798322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.798339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.802932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.802963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.802981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.807663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.807718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.807734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.812408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.812452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.812469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.817123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.817154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.817172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.821703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.821734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.821751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.826323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.826353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.826385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.831061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.831092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:31.697 [2024-12-07 01:02:47.831124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.835870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.835901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.835918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.697 [2024-12-07 01:02:47.841628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.697 [2024-12-07 01:02:47.841673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.697 [2024-12-07 01:02:47.841690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.846407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.846439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.846456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.851622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.851654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.851672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.856818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.856850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.856867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.861619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.861666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.861685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.866751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.866782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.866814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.872531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.872563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.872581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.877855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.877892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.877910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.883393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.883439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.883457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.888786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.888818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.888849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.894810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.894842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.894861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.900591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.900637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.900654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.906623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.906669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.906686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.960 [2024-12-07 01:02:47.911900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.960 [2024-12-07 01:02:47.911932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.960 [2024-12-07 01:02:47.911950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.917714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.917748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.917765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.923315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.923347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.923365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.928787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.928819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.928837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.934791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.934824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.940077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.940118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.940136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.945023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 
00:35:31.961 [2024-12-07 01:02:47.945064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.945098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.949619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.949668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.954843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.954875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.954893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.961426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.961459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.961477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.969126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.969177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.974893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.974931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.974950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.982416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.982447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.982465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.989321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.989352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.989371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.994428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.994460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.994478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:47.999001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:47.999032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:47.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.003753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.003785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.003803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.008422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.008453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.013180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.013228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.017916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.017947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.017965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.022834] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.022865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.022883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.027779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.027811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.027828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.033242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.033274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.033291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.040005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.040038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.040058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.047340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.047373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.047392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.054728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.054761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.054780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.961 [2024-12-07 01:02:48.062643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.062690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.961 [2024-12-07 01:02:48.062708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:31.961 [2024-12-07 01:02:48.069721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.961 [2024-12-07 01:02:48.069755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.069773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.075570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.075603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.075627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.080933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.080966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.080983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.086881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.086914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.086933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.093069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.093101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.093120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.099023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.099055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.099074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:31.962 [2024-12-07 01:02:48.104629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:31.962 [2024-12-07 01:02:48.104660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.962 [2024-12-07 01:02:48.104692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.224 [2024-12-07 01:02:48.110710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.224 [2024-12-07 01:02:48.110744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.224 [2024-12-07 01:02:48.110762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.224 [2024-12-07 01:02:48.116664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.224 [2024-12-07 01:02:48.116711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.224 [2024-12-07 01:02:48.116728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.224 [2024-12-07 01:02:48.122403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.224 [2024-12-07 01:02:48.122434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.224 [2024-12-07 01:02:48.122452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.224 [2024-12-07 01:02:48.127831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.224 [2024-12-07 01:02:48.127885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.127904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.133341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.133373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.133390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.139589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.139620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.139637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.146001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.146034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.146052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.151724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.151757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.151790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.157389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.157422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.157441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.163471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.163519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.163537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.169502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.169534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.169552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.176072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.176106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.176124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.181864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.181895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.181912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.188132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.188164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:32.225 [2024-12-07 01:02:48.188183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.194342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.194375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.194394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.199725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.199758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.199775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.205181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.205214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.205232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.211546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.211578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.211596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.219307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.219341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.219359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.226181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.226216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.226235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.234313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.234347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.234372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.242327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.242361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.242379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.250464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.250497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.250515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.256530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.256563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.256581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.260937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.260968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.260986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.265384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.265415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.265433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.269845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.269876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.269893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.274974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.275013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.275033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.281379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.281411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.225 [2024-12-07 01:02:48.281429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.225 [2024-12-07 01:02:48.288929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.225 [2024-12-07 01:02:48.288961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.288979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.296459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.296492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.296510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.304100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.304133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.311628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.311660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.311678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.319135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.319167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.319185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.326685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 
00:35:32.226 [2024-12-07 01:02:48.326719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.326737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.226 5580.00 IOPS, 697.50 MiB/s [2024-12-07T00:02:48.377Z] [2024-12-07 01:02:48.335569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.335602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.335621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.339741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.339773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.339791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.347433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.347466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.347507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.355077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.355111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.355130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.362657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.362690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.362708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.226 [2024-12-07 01:02:48.370245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.226 [2024-12-07 01:02:48.370293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.226 [2024-12-07 01:02:48.370311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.377933] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.377965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.377983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.385438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.385469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.385486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.393102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.393134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.393152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.399175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.399222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.399239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.405211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.405245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.405263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.411511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.411551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.411570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.418068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.418100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.418119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:32.489 [2024-12-07 01:02:48.425680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.425712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.425744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.433571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.433604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.441044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.441076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.441095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.446952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.446986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.447017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.451920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.451952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.451969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.456411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.456443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.456459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.461047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.461080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.461097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.466286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.466318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.471881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.471913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.471931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.478257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.478304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.478322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.486387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.486435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.486454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.493319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.493366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.493384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.499325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.499374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.499392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.505834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.505881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.505899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.512418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.512451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.512470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.518496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.518529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.518555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.524362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.489 [2024-12-07 01:02:48.524395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.489 [2024-12-07 01:02:48.524413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.489 [2024-12-07 01:02:48.529662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.529695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.529713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.535841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.535874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.535892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.542700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.542733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.549591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.549624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.549642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.555239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.555272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.555291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.558594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.558626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.558644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.564830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.564861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.564879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.571104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.571144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.571163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.577730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.577762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.577780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.583878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.583925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.583942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.589760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.589793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:32.490 [2024-12-07 01:02:48.589810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.595830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.595862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.595879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.601338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.601370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.601388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.605865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.605897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.605915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.610412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.610444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.610462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.615655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.615686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.615704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.622512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.622545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.622562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.490 [2024-12-07 01:02:48.629540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:32.490 [2024-12-07 01:02:48.629571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.490 [2024-12-07 01:02:48.629589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
[... the same three-record pattern repeats for every remaining injected digest error between 01:02:48.635265 and 01:02:49.328214: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports "data digest error on tqpair=(0x1dbede0)", nvme_qpair.c:243 prints the affected READ (sqid:1, nsid:1, len:32, varying cid and lba), and nvme_qpair.c:474 prints the matching COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
[2024-12-07 01:02:49.332785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbede0) 00:35:33.279 [2024-12-07 01:02:49.332815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.279 [2024-12-07 01:02:49.332832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.279 5644.00 IOPS, 705.50 MiB/s 00:35:33.279 Latency(us) 00:35:33.279 [2024-12-07T00:02:49.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.279 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:33.279 nvme0n1 : 2.00 5643.82 
705.48 0.00 0.00 2830.75 719.08 11456.66 00:35:33.279 [2024-12-07T00:02:49.430Z] =================================================================================================================== 00:35:33.279 [2024-12-07T00:02:49.430Z] Total : 5643.82 705.48 0.00 0.00 2830.75 719.08 11456.66 00:35:33.279 { 00:35:33.279 "results": [ 00:35:33.279 { 00:35:33.279 "job": "nvme0n1", 00:35:33.279 "core_mask": "0x2", 00:35:33.279 "workload": "randread", 00:35:33.279 "status": "finished", 00:35:33.279 "queue_depth": 16, 00:35:33.279 "io_size": 131072, 00:35:33.279 "runtime": 2.002897, 00:35:33.279 "iops": 5643.824919603953, 00:35:33.279 "mibps": 705.4781149504942, 00:35:33.279 "io_failed": 0, 00:35:33.279 "io_timeout": 0, 00:35:33.279 "avg_latency_us": 2830.7516552646066, 00:35:33.279 "min_latency_us": 719.0755555555555, 00:35:33.279 "max_latency_us": 11456.663703703704 00:35:33.279 } 00:35:33.279 ], 00:35:33.279 "core_count": 1 00:35:33.279 } 00:35:33.279 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:33.279 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:33.279 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:33.279 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:33.279 | .driver_specific 00:35:33.279 | .nvme_error 00:35:33.279 | .status_code 00:35:33.279 | .command_transient_transport_error' 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 406799 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 406799 ']' 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 406799 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406799 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406799' 00:35:33.539 killing process with pid 406799 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 406799 00:35:33.539 Received shutdown signal, test time was about 2.000000 seconds 00:35:33.539 00:35:33.539 Latency(us) 00:35:33.539 [2024-12-07T00:02:49.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.539 [2024-12-07T00:02:49.690Z] =================================================================================================================== 00:35:33.539 [2024-12-07T00:02:49.690Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:35:33.539 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 406799 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=407200 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 407200 /var/tmp/bperf.sock 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 407200 ']' 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:33.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.797 01:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.797 [2024-12-07 01:02:49.914025] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:35:33.797 [2024-12-07 01:02:49.914108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407200 ] 00:35:34.055 [2024-12-07 01:02:49.981143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.055 [2024-12-07 01:02:50.031487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.055 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.055 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:34.056 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:34.056 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.314 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.881 nvme0n1 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:34.881 01:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:34.881 Running I/O for 2 seconds... 
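The xtrace lines above boil down to the following flow. This is a condensed, hand-written sketch assembled only from the RPC calls visible in this trace (the socket path, target address, bdevperf arguments, and the jq filter are copied verbatim from the log); the real logic lives in host/digest.sh and the autotest helpers, which this sketch merely approximates, not a literal excerpt of either.

#!/usr/bin/env bash
# Condensed sketch of the digest-error pass being executed above (paraphrased from
# the xtrace output of host/digest.sh; paths and arguments taken from this log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait mode (-z): core mask 0x2, 4 KiB random writes, QD 128, 2 s run.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable per-controller NVMe error counters and unlimited bdev-level retries, then
# attach the target with data digest (--ddgst) so every payload is CRC32C-checked.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Ask the accel layer to corrupt the next 256 crc32c results (rpc_cmd in the real
# script sends this to the application's default RPC socket), then run the workload.
# Each corrupted digest surfaces as a "data digest error" plus a COMMAND TRANSIENT
# TRANSPORT ERROR completion, exactly as logged before and after this point.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Pass criterion: the transient-transport-error counter must have moved.
errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))

The (( errcount > 0 )) check is the same test that appears earlier in this log as host/digest.sh@71, where the randread pass reported 365 transient transport errors and was therefore considered successful.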
00:35:34.882 [2024-12-07 01:02:50.973810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef0ff8 00:35:34.882 [2024-12-07 01:02:50.975168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.882 [2024-12-07 01:02:50.975209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:34.882 [2024-12-07 01:02:50.988298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016edf988 00:35:34.882 [2024-12-07 01:02:50.990129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.882 [2024-12-07 01:02:50.990174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:34.882 [2024-12-07 01:02:50.996690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee4de8 00:35:34.882 [2024-12-07 01:02:50.997656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.882 [2024-12-07 01:02:50.997684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:34.882 [2024-12-07 01:02:51.009281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef46d0 00:35:34.882 [2024-12-07 01:02:51.010488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.882 [2024-12-07 01:02:51.010533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:34.882 [2024-12-07 01:02:51.021259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee1f80 00:35:34.882 [2024-12-07 01:02:51.021931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:34.882 [2024-12-07 01:02:51.021976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:35.141 [2024-12-07 01:02:51.035589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efe2e8 00:35:35.142 [2024-12-07 01:02:51.037394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.037439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.044094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef5be8 00:35:35.142 [2024-12-07 01:02:51.044919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.044963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.058184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016edece0 00:35:35.142 [2024-12-07 01:02:51.059678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.059722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.069867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eea248 00:35:35.142 [2024-12-07 01:02:51.070953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.071005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.081025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efac10 00:35:35.142 [2024-12-07 01:02:51.081959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.081989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.092161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eefae0 00:35:35.142 [2024-12-07 01:02:51.092911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.092940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.103963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee3498 00:35:35.142 [2024-12-07 01:02:51.105147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.105191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.116489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef2d80 00:35:35.142 [2024-12-07 01:02:51.117880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.117924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.128430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef7100 00:35:35.142 [2024-12-07 01:02:51.129820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.129864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.139401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efef90 00:35:35.142 [2024-12-07 01:02:51.140615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.140659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.151392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efd208 00:35:35.142 [2024-12-07 01:02:51.152388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.152422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.162675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee7818 00:35:35.142 [2024-12-07 01:02:51.163510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.163551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.174023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef9f68 00:35:35.142 [2024-12-07 01:02:51.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.174663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.188557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.142 [2024-12-07 01:02:51.190369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.190398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.196781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef1430 00:35:35.142 [2024-12-07 01:02:51.197656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.197708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.210452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef96f8 00:35:35.142 [2024-12-07 01:02:51.211765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.211810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.222390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef46d0 00:35:35.142 [2024-12-07 01:02:51.223449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.223493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.233763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef81e0 00:35:35.142 [2024-12-07 01:02:51.235173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.235203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.245384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee88f8 00:35:35.142 [2024-12-07 01:02:51.246650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.246692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.256561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ede470 00:35:35.142 [2024-12-07 01:02:51.257485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.257528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.267224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efbcf0 00:35:35.142 [2024-12-07 01:02:51.268097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.268140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:35.142 [2024-12-07 01:02:51.281755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee3d08 00:35:35.142 [2024-12-07 01:02:51.283128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.142 [2024-12-07 01:02:51.283173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:35.401 [2024-12-07 01:02:51.293531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee4140 00:35:35.401 [2024-12-07 01:02:51.295107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.295136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.304516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef0ff8 00:35:35.402 [2024-12-07 01:02:51.305884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.305914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.316086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eef6a8 00:35:35.402 [2024-12-07 01:02:51.317334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.317362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.327985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efe720 00:35:35.402 [2024-12-07 01:02:51.328783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.328811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.338869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef2d80 00:35:35.402 [2024-12-07 01:02:51.339562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.339591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.350584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee5a90 00:35:35.402 [2024-12-07 01:02:51.351716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.351758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.362474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.402 [2024-12-07 01:02:51.363126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.363156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.376920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef5be8 00:35:35.402 [2024-12-07 01:02:51.378843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.378887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.385319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee5658 00:35:35.402 [2024-12-07 01:02:51.386551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.386592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.397482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee1710 00:35:35.402 [2024-12-07 01:02:51.398538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.398582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.409095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eef6a8 00:35:35.402 [2024-12-07 01:02:51.410122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.410151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.421431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efb480 00:35:35.402 [2024-12-07 01:02:51.422567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.422611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.434631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efa3a0 00:35:35.402 [2024-12-07 01:02:51.436038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.436084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.446690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efdeb0 00:35:35.402 [2024-12-07 01:02:51.448164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.448193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.458265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efe2e8 00:35:35.402 [2024-12-07 01:02:51.459851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 
01:02:51.459899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.468838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef5be8 00:35:35.402 [2024-12-07 01:02:51.470524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.470553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.480800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016edece0 00:35:35.402 [2024-12-07 01:02:51.482226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.482255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.492559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef9b30 00:35:35.402 [2024-12-07 01:02:51.493741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.493784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.505077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef8e88 00:35:35.402 [2024-12-07 01:02:51.506399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.506443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.516262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef5378 00:35:35.402 [2024-12-07 01:02:51.517389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.517431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.527905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee1f80 00:35:35.402 [2024-12-07 01:02:51.529163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.402 [2024-12-07 01:02:51.529206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:35.402 [2024-12-07 01:02:51.542109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef0350 00:35:35.402 [2024-12-07 01:02:51.544053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:35.402 [2024-12-07 01:02:51.544097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.550664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee0ea0 00:35:35.664 [2024-12-07 01:02:51.551675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.551705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.564721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eea680 00:35:35.664 [2024-12-07 01:02:51.566356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.566386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.575748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef8618 00:35:35.664 [2024-12-07 01:02:51.577252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.577281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.587403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef31b8 00:35:35.664 [2024-12-07 01:02:51.588609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.588652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.598686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef1430 00:35:35.664 [2024-12-07 01:02:51.599849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.599892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.610909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef20d8 00:35:35.664 [2024-12-07 01:02:51.612294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.612324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.621801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016edf118 00:35:35.664 [2024-12-07 01:02:51.622903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11582 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.622932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.633251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef20d8 00:35:35.664 [2024-12-07 01:02:51.634127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.634156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.645565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee0ea0 00:35:35.664 [2024-12-07 01:02:51.646618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.646661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.657817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eeb760 00:35:35.664 [2024-12-07 01:02:51.658986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.659036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.669406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee9e10 00:35:35.664 [2024-12-07 01:02:51.670767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.670811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.681345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efda78 00:35:35.664 [2024-12-07 01:02:51.682634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.682677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:35.664 [2024-12-07 01:02:51.692843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efe2e8 00:35:35.664 [2024-12-07 01:02:51.693800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.664 [2024-12-07 01:02:51.693843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.703923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee5ec8 00:35:35.665 [2024-12-07 01:02:51.704754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10454 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.704797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.714936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef35f0 00:35:35.665 [2024-12-07 01:02:51.715580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.715609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.727044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef1430 00:35:35.665 [2024-12-07 01:02:51.727793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.727820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.739198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef7970 00:35:35.665 [2024-12-07 01:02:51.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.740187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.750349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee6300 00:35:35.665 [2024-12-07 01:02:51.752148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.752179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.760240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eec840 00:35:35.665 [2024-12-07 01:02:51.761085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.761118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.774740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee88f8 00:35:35.665 [2024-12-07 01:02:51.775956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.776006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.786121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef7538 00:35:35.665 [2024-12-07 01:02:51.787505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.787547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.797106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eef6a8 00:35:35.665 [2024-12-07 01:02:51.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.798273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:35.665 [2024-12-07 01:02:51.808593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eddc00 00:35:35.665 [2024-12-07 01:02:51.809770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.665 [2024-12-07 01:02:51.809813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.820514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eeaab8 00:35:35.927 [2024-12-07 01:02:51.821741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.821770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.832079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee84c0 00:35:35.927 [2024-12-07 01:02:51.833015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.833059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.843580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee88f8 00:35:35.927 [2024-12-07 01:02:51.844706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.844750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.855190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eef270 00:35:35.927 [2024-12-07 01:02:51.855834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.855877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.869262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eebfd0 00:35:35.927 [2024-12-07 01:02:51.870895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.870937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.878446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eeee38 00:35:35.927 [2024-12-07 01:02:51.879580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.879623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.892986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016efac10 00:35:35.927 [2024-12-07 01:02:51.894698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.894743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.904862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016eeb760 00:35:35.927 [2024-12-07 01:02:51.906606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.906648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.916113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ee88f8 00:35:35.927 [2024-12-07 01:02:51.917828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.917872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.927177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.927 [2024-12-07 01:02:51.927422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.927451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.940900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.927 [2024-12-07 01:02:51.941193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.941222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.927 [2024-12-07 01:02:51.954527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.927 [2024-12-07 
01:02:51.954815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.954843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.927 21510.00 IOPS, 84.02 MiB/s [2024-12-07T00:02:52.078Z] [2024-12-07 01:02:51.968254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.927 [2024-12-07 01:02:51.968503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.927 [2024-12-07 01:02:51.968531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:51.982123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:51.982418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:51.982459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:51.995799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:51.996123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:51.996152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:52.009591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:52.009845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:52.009874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:52.022872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:52.023138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:52.023167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:52.036421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:52.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:52.036797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:52.049915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:52.050278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:52.050319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:35.928 [2024-12-07 01:02:52.063674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:35.928 [2024-12-07 01:02:52.063930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.928 [2024-12-07 01:02:52.063973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.188 [2024-12-07 01:02:52.077236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.188 [2024-12-07 01:02:52.077541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.188 [2024-12-07 01:02:52.077569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.188 [2024-12-07 01:02:52.091053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.188 [2024-12-07 01:02:52.091335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.188 [2024-12-07 01:02:52.091384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.188 [2024-12-07 01:02:52.104810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.188 [2024-12-07 01:02:52.105078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.105106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.118664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.119029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.119058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.132491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.132822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.132851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.146192] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.146499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.146543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.160027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.160283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.160311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.173713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.174047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.174090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.187459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.187764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.187791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.201266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.201572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.201599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.214960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.215318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.215361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.228775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.229076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.229119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.242671] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.243017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.243046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.256528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.256835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.256880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.270423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.270692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.270720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.283899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.284228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.297554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.297832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.297860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.311237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.311553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.311581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.189 [2024-12-07 01:02:52.325170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.189 [2024-12-07 01:02:52.325464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.189 [2024-12-07 01:02:52.325492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 
01:02:52.338987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.339293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.339322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.352797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.353139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.353168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.366427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.366675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.366704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.380418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.380693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.380722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.394109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.394348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.408108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.408372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.408414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.421905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.422245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.422274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:35:36.449 [2024-12-07 01:02:52.435705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.436034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.436080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.449456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.449792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.449826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.463297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.449 [2024-12-07 01:02:52.463587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.449 [2024-12-07 01:02:52.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.449 [2024-12-07 01:02:52.477062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.477388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.477431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.490584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.490837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.490866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.504160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.504459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.504502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.517950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.518224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.518267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.531879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.532189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.532218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.545381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.545699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.545742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.559181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.559389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.559418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.572803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.573102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.573130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.450 [2024-12-07 01:02:52.586608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.450 [2024-12-07 01:02:52.586927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.450 [2024-12-07 01:02:52.586955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.600519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.600795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.600823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.614354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.614643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.614671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.628110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.628362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.641498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.641705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.641734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.655037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.655272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.655300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.668533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.668832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.668860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.682166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.682406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.682433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.695790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.696050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.696078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.709315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.709609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.709636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.723090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.723394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.723422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.736550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.736852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.736880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.750127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.750407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.750435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.763740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.764069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.764098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.777320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.777559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.777587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.790822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.791066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.791094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.804101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.804338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.804385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.817739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.817976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.818011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.831398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.709 [2024-12-07 01:02:52.831674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.709 [2024-12-07 01:02:52.831716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.709 [2024-12-07 01:02:52.845182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.710 [2024-12-07 01:02:52.845478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.710 [2024-12-07 01:02:52.845506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.968 [2024-12-07 01:02:52.858720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.968 [2024-12-07 01:02:52.859001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.968 [2024-12-07 01:02:52.859030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.968 [2024-12-07 01:02:52.872584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.968 [2024-12-07 01:02:52.872917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.968 [2024-12-07 01:02:52.872961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.968 [2024-12-07 01:02:52.886290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.968 [2024-12-07 01:02:52.886583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.968 [2024-12-07 01:02:52.886611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.968 [2024-12-07 01:02:52.900044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.968 [2024-12-07 01:02:52.900322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.900351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 [2024-12-07 01:02:52.913870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.969 [2024-12-07 01:02:52.914210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.914253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 [2024-12-07 01:02:52.927726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.969 [2024-12-07 01:02:52.928081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.928111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 [2024-12-07 01:02:52.941465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.969 [2024-12-07 01:02:52.941759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.941802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 [2024-12-07 01:02:52.955336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.969 [2024-12-07 01:02:52.955614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.955656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 20078.50 IOPS, 78.43 MiB/s [2024-12-07T00:02:53.120Z] [2024-12-07 01:02:52.968911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230ee70) with pdu=0x200016ef6890 00:35:36.969 [2024-12-07 01:02:52.969185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:36.969 [2024-12-07 01:02:52.969229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:36.969 00:35:36.969 Latency(us) 00:35:36.969 [2024-12-07T00:02:53.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:36.969 nvme0n1 : 2.01 20072.43 78.41 0.00 0.00 6361.79 2512.21 14563.56 00:35:36.969 [2024-12-07T00:02:53.120Z] =================================================================================================================== 00:35:36.969 [2024-12-07T00:02:53.120Z] Total : 20072.43 78.41 0.00 0.00 6361.79 2512.21 14563.56 00:35:36.969 { 00:35:36.969 "results": [ 00:35:36.969 { 00:35:36.969 "job": "nvme0n1", 00:35:36.969 "core_mask": "0x2", 00:35:36.969 "workload": "randwrite", 00:35:36.969 "status": "finished", 00:35:36.969 "queue_depth": 128, 00:35:36.969 
"io_size": 4096, 00:35:36.969 "runtime": 2.006583, 00:35:36.969 "iops": 20072.43159141685, 00:35:36.969 "mibps": 78.40793590397207, 00:35:36.969 "io_failed": 0, 00:35:36.969 "io_timeout": 0, 00:35:36.969 "avg_latency_us": 6361.793817517396, 00:35:36.969 "min_latency_us": 2512.213333333333, 00:35:36.969 "max_latency_us": 14563.555555555555 00:35:36.969 } 00:35:36.969 ], 00:35:36.969 "core_count": 1 00:35:36.969 } 00:35:36.969 01:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:36.969 01:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:36.969 01:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:36.969 01:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:36.969 | .driver_specific 00:35:36.969 | .nvme_error 00:35:36.969 | .status_code 00:35:36.969 | .command_transient_transport_error' 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 407200 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 407200 ']' 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 407200 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407200 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407200' 00:35:37.228 killing process with pid 407200 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 407200 00:35:37.228 Received shutdown signal, test time was about 2.000000 seconds 00:35:37.228 00:35:37.228 Latency(us) 00:35:37.228 [2024-12-07T00:02:53.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.228 [2024-12-07T00:02:53.379Z] =================================================================================================================== 00:35:37.228 [2024-12-07T00:02:53.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:37.228 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 407200 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:37.487 01:02:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=407661 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 407661 /var/tmp/bperf.sock 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 407661 ']' 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:37.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.487 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:37.487 [2024-12-07 01:02:53.519119] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:37.488 [2024-12-07 01:02:53.519207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407661 ] 00:35:37.488 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:37.488 Zero copy mechanism will not be used. 
00:35:37.488 [2024-12-07 01:02:53.592897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.746 [2024-12-07 01:02:53.641694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.746 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.746 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:37.746 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:37.746 01:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.003 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.263 nvme0n1 00:35:38.263 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:38.263 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.263 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:38.521 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.521 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:38.521 01:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:38.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:38.521 Zero copy mechanism will not be used. 00:35:38.521 Running I/O for 2 seconds... 
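The interleaved xtrace output above is the digest.sh setup for this 131072-byte, qd=16 error subtest. Condensed into a readable sequence, it appears to be the following (a sketch reconstructed from the captured trace, not part of the original console output; long paths are shortened to $SPDK, and because the trace does not expand rpc_cmd's socket, plain rpc.py is shown for those calls):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"        # RPCs aimed at the bdevperf app

  # bdevperf itself was launched earlier in the trace as:
  #   $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep per-controller NVMe error counters
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # start with crc32c injection off
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # attach with data digest enabled
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt crc32c results (-i 32 as traced)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # 2-second randwrite run

  # afterwards the transient-error count is read back the same way as for the previous subtest:
  $BPERF bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow are the expected result of that injected crc32c corruption.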
00:35:38.521 [2024-12-07 01:02:54.523747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.523875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.523917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.529080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.529170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.529200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.534725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.534806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.534834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.540319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.540393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.540421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.545836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.545913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.545941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.551371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.551442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.551470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.556426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.556496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.556523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.561957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.562035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.562063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.567016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.567115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.567144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.572018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.572113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.572143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.577048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.577127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.577156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.582105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.582195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.582225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.587310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.587415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.587449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.521 [2024-12-07 01:02:54.593802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:38.521 [2024-12-07 01:02:54.594016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.521 [2024-12-07 01:02:54.594046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:35:38.521 [2024-12-07 01:02:54.599962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8
00:35:38.521 [2024-12-07 01:02:54.600150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.522 [2024-12-07 01:02:54.600180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[00:35:38.522 - 00:35:39.302; 2024-12-07 01:02:54.606206 - 01:02:55.356009] The same three-line sequence repeats every few milliseconds for each subsequent WRITE on this queue pair: tcp.c:2241:data_crc32_calc_done reports a data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8, nvme_qpair.c:243 prints the offending WRITE command (sqid:1 cid:8 nsid:1, len:32, lba varying per request), and nvme_qpair.c:474 prints the resulting COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling through 001a, 003a, 005a and 007a.
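(Context for the errors above, not part of the test output: the "Data digest error" messages come from the NVMe/TCP data digest (DDGST) check, a CRC-32C computed over the DATA field of a PDU and carried with the PDU. In this run, each digest failure is immediately followed by the affected WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The sketch below is illustrative only and is not SPDK's implementation; crc32c() and data_digest_ok() are made-up names for this example, and the bit-by-bit CRC is chosen for clarity rather than speed.)

/*
 * Illustrative sketch only (not SPDK code): what an NVMe/TCP data digest
 * (DDGST) check boils down to.  crc32c() and data_digest_ok() are made-up
 * names for this example.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-by-bit CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Recompute the digest over the received DATA bytes and compare it with
 * the DDGST value that arrived with the PDU. */
static bool data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst;
}

int main(void)
{
    uint8_t payload[512] = { 0 };                      /* stand-in for one PDU's DATA  */
    uint32_t ddgst = crc32c(payload, sizeof(payload)); /* digest the sender computed   */

    printf("intact payload:    %s\n",
           data_digest_ok(payload, sizeof(payload), ddgst) ? "digest ok" : "digest error");

    payload[7] ^= 0x01;                                /* one flipped bit in transit   */
    printf("corrupted payload: %s\n",
           data_digest_ok(payload, sizeof(payload), ddgst) ? "digest ok" : "digest error");
    return 0;
}

(Real code paths use table-driven or hardware-accelerated CRC-32C rather than a bit loop, but the mismatch detection shown here is the same idea.)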
01:02:55.326131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.331068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.331159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.331187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.336153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.336231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.336259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.341022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.341113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.341141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.345986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.346071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.346100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.350830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.350920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.350946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.355876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.355973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.356009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.360896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.361012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:39.302 [2024-12-07 01:02:55.361039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.365943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.366028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.302 [2024-12-07 01:02:55.366055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.302 [2024-12-07 01:02:55.370865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.302 [2024-12-07 01:02:55.370945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.370972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.376063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.376134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.376161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.381645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.381716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.381743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.387008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.387078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.387105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.391927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.392001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.392029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.397474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.397577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.402788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.402858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.402885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.408526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.408647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.408674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.415539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.415654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.415681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.422137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.422211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.422243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.428857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.428927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.428955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.435466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.435606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.435636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.442190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.442281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.442311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.303 [2024-12-07 01:02:55.447542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.303 [2024-12-07 01:02:55.447628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.303 [2024-12-07 01:02:55.447654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.452757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.452837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.452870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.457919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.458001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.458029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.463006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.463075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.463102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.468278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.468351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.468377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.473705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.473808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.473837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.478892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.479050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.479080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.485237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.485337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.485366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.490578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.490651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.490679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.496335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.496429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.502240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.502332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.502360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.507840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.507923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.507950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.512961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.513059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.513088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.517911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.519396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.519426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.562 5602.00 IOPS, 700.25 MiB/s [2024-12-07T00:02:55.713Z] [2024-12-07 01:02:55.524064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.524178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.524205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.529025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.529212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.529240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.535234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.535421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.535451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.540663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.540778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.540807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.545685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.545801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.545829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.550674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:39.562 [2024-12-07 01:02:55.550787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:39.562 [2024-12-07 01:02:55.550815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:39.562 [2024-12-07 01:02:55.555757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) 
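For context on what these entries mean: tcp.c:2241:data_crc32_calc_done appears to fire when the CRC32C digest computed over a received PDU's data does not match the DDGST value carried in that PDU, and the paired completions show the host reporting each affected WRITE as TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of that digest calculation follows, assuming the standard CRC32C (Castagnoli) used for NVMe/TCP data digests; the crc32c() helper below is illustrative only and is not SPDK's implementation.

    /*
     * Illustrative sketch (not SPDK source): plain bit-by-bit CRC32C with the
     * reflected Castagnoli polynomial 0x82F63B78, seed 0xFFFFFFFF and final
     * XOR 0xFFFFFFFF.  Production code would use a lookup table or the
     * SSE4.2 crc32 instruction instead of this loop.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;   /* final inversion */
    }

    int main(void)
    {
        /* "123456789" is the conventional CRC check input; its CRC32C is 0xE3069283. */
        const char msg[] = "123456789";
        printf("crc32c(\"%s\") = 0x%08x\n", msg, crc32c(msg, sizeof(msg) - 1));
        return 0;
    }

A receiver holding a digest computed this way would compare it against the PDU's DDGST field and, on mismatch, fail the command back to the host as a transient transport error, which is the pattern visible throughout this run.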
[... repeated entries collapsed: from 2024-12-07 01:02:55.524 through 01:02:55.982 the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for qid:1 cid:9 (nsid:1, len:32, varying LBAs) on tqpair=(0x230f1b0) with pdu=0x200016eff3c8, the elapsed-time prefix advancing from 00:35:39.562 to 00:35:40.085 ...]
00:35:40.085 [2024-12-07 01:02:55.986953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:55.987048] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:55.987075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:55.991444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:55.991515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:55.991542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:55.995624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:55.995690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:55.995722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.000061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.000132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.000159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.004463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.004550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.004576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.008954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.009039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.009066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.013426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.013493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.013519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.017993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.018070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.018096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.022637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.022730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.022757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.027429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.027498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.027524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.032115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.032213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.032241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.036643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.036747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.036773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.041434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.041548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.041575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.046062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.046168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.046197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.050779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 
01:02:56.050846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.050873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.055251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.055321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.055348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.059958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.060094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.060122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.064991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.065071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.065098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.069558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.069628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.069655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.074254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.074328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.074354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.079057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.085 [2024-12-07 01:02:56.079135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.085 [2024-12-07 01:02:56.079162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.085 [2024-12-07 01:02:56.083739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 
00:35:40.085 [2024-12-07 01:02:56.083809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.083836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.088366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.088433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.088460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.093083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.093152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.093179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.098503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.098578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.098604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.102935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.103067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.103093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.108107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.108244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.113537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.113694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.113722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.119044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.119173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.119208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.124301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.124499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.124526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.129525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.129696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.129724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.134745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.134918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.134945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.140199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.140377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.140405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.145528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.145709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.145736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.150877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.151063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.151092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.156353] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.156530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.156572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.161770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.161967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.162020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.167113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.167305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.167332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.172481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.172642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.172670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.177772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.177969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.178018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.183159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.183296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.183324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.188433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.188640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.188687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.193564] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.193689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.193716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.198116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.198193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.198221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.202335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.202472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.202499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.206970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.207067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.207095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.211521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.211648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.211675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.216658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.216746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.216787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.086 [2024-12-07 01:02:56.221759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.086 [2024-12-07 01:02:56.221830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.086 [2024-12-07 01:02:56.221857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.087 
[2024-12-07 01:02:56.226152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.087 [2024-12-07 01:02:56.226237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.087 [2024-12-07 01:02:56.226265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.087 [2024-12-07 01:02:56.230367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.087 [2024-12-07 01:02:56.230447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.087 [2024-12-07 01:02:56.230474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.234654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.234726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.234754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.239434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.239622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.239650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.244540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.244712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.250163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.250311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.250345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.255857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.255968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.256004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a 
p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.260213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.260295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.260323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.264680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.264752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.264780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.268924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.269067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.269095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.273945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.274114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.274141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.279268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.279464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.279519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.285018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.285196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.285224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.290145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.290247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.290274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.294518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.294661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.294689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.298847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.298961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.298988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.303424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.303531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.303559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.308536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.308691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.308734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.313752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.313837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.313866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.318347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.318455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.318482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.322949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.323067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.323095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.327640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.327742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.327769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.332106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.332217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.332244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.336485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.336629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.336657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.341343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.341440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.341467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.346243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.346323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.346351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.350949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.351020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.351048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.355331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.355411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.355439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.359725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.359819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.359846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.364350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.364438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.348 [2024-12-07 01:02:56.364465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.348 [2024-12-07 01:02:56.368852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.348 [2024-12-07 01:02:56.368921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.368950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.373313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.373381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.373416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.377632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.377710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.377752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.381974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.382061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.382089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.386414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.386570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 
01:02:56.386598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.390853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.390953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.390982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.395334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.395428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.395455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.399801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.399873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.399900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.404188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.404257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.404285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.408603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.408756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.412988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.413075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.413103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.417356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.417433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:40.349 [2024-12-07 01:02:56.417460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.421697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.421777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.421804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.425982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.426069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.426096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.430294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.430388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.434867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.434956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.434983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.439364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.439451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.439479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.443723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.443805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.448206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.448275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.448303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.452909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.453009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.453037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.458447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.458533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.458560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.462842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.462919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.462946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.467252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.467322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.467364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.472569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.472645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.472673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.476993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.477075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.477103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.481509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.481602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.481629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.485706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.485782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.349 [2024-12-07 01:02:56.485810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.349 [2024-12-07 01:02:56.489941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.349 [2024-12-07 01:02:56.490038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.350 [2024-12-07 01:02:56.490071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.350 [2024-12-07 01:02:56.494207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.350 [2024-12-07 01:02:56.494297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.350 [2024-12-07 01:02:56.494324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.498672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.498756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.498784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.503320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.503408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.503436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.507586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.507656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.507683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.511808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.511885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.511912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.515943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.516029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.516058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:40.609 [2024-12-07 01:02:56.520357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x230f1b0) with pdu=0x200016eff3c8 00:35:40.609 [2024-12-07 01:02:56.521869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.609 [2024-12-07 01:02:56.521900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:40.609 5947.50 IOPS, 743.44 MiB/s 00:35:40.609 Latency(us) 00:35:40.609 [2024-12-07T00:02:56.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.609 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:40.609 nvme0n1 : 2.00 5946.72 743.34 0.00 0.00 2683.68 1498.83 7475.96 00:35:40.609 [2024-12-07T00:02:56.760Z] =================================================================================================================== 00:35:40.609 [2024-12-07T00:02:56.760Z] Total : 5946.72 743.34 0.00 0.00 2683.68 1498.83 7475.96 00:35:40.609 { 00:35:40.609 "results": [ 00:35:40.609 { 00:35:40.609 "job": "nvme0n1", 00:35:40.609 "core_mask": "0x2", 00:35:40.609 "workload": "randwrite", 00:35:40.609 "status": "finished", 00:35:40.609 "queue_depth": 16, 00:35:40.609 "io_size": 131072, 00:35:40.609 "runtime": 2.003624, 00:35:40.609 "iops": 5946.724535142323, 00:35:40.609 "mibps": 743.3405668927903, 00:35:40.609 "io_failed": 0, 00:35:40.609 "io_timeout": 0, 00:35:40.609 "avg_latency_us": 2683.6755691083445, 00:35:40.609 "min_latency_us": 1498.8325925925926, 00:35:40.609 "max_latency_us": 7475.958518518519 00:35:40.609 } 00:35:40.609 ], 00:35:40.609 "core_count": 1 00:35:40.609 } 00:35:40.609 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:40.609 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:40.609 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:40.609 | .driver_specific 00:35:40.609 | .nvme_error 00:35:40.609 | .status_code 00:35:40.609 | .command_transient_transport_error' 00:35:40.609 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 384 > 0 )) 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 407661 00:35:40.868 01:02:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 407661 ']' 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 407661 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407661 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407661' 00:35:40.868 killing process with pid 407661 00:35:40.868 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 407661 00:35:40.868 Received shutdown signal, test time was about 2.000000 seconds 00:35:40.868 00:35:40.869 Latency(us) 00:35:40.869 [2024-12-07T00:02:57.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.869 [2024-12-07T00:02:57.020Z] =================================================================================================================== 00:35:40.869 [2024-12-07T00:02:57.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.869 01:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 407661 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 406367 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 406367 ']' 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 406367 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406367 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406367' 00:35:41.128 killing process with pid 406367 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 406367 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 406367 00:35:41.128 00:35:41.128 real 0m14.962s 00:35:41.128 user 0m30.079s 00:35:41.128 sys 0m4.217s 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.128 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:35:41.128 ************************************ 00:35:41.128 END TEST nvmf_digest_error 00:35:41.128 ************************************ 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:41.387 rmmod nvme_tcp 00:35:41.387 rmmod nvme_fabrics 00:35:41.387 rmmod nvme_keyring 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 406367 ']' 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 406367 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 406367 ']' 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 406367 00:35:41.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (406367) - No such process 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 406367 is not found' 00:35:41.387 Process with pid 406367 is not found 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.387 01:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.291 00:35:43.291 real 0m35.368s 00:35:43.291 user 1m2.454s 00:35:43.291 sys 0m10.244s 00:35:43.291 01:02:59 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.291 ************************************ 00:35:43.291 END TEST nvmf_digest 00:35:43.291 ************************************ 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.291 01:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.551 ************************************ 00:35:43.551 START TEST nvmf_bdevperf 00:35:43.551 ************************************ 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:43.551 * Looking for test storage... 00:35:43.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:43.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.551 --rc genhtml_branch_coverage=1 00:35:43.551 --rc genhtml_function_coverage=1 00:35:43.551 --rc genhtml_legend=1 00:35:43.551 --rc geninfo_all_blocks=1 00:35:43.551 --rc geninfo_unexecuted_blocks=1 00:35:43.551 00:35:43.551 ' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:43.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.551 --rc genhtml_branch_coverage=1 00:35:43.551 --rc genhtml_function_coverage=1 00:35:43.551 --rc genhtml_legend=1 00:35:43.551 --rc geninfo_all_blocks=1 00:35:43.551 --rc geninfo_unexecuted_blocks=1 00:35:43.551 00:35:43.551 ' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:43.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.551 --rc genhtml_branch_coverage=1 00:35:43.551 --rc genhtml_function_coverage=1 00:35:43.551 --rc genhtml_legend=1 00:35:43.551 --rc geninfo_all_blocks=1 00:35:43.551 --rc geninfo_unexecuted_blocks=1 00:35:43.551 00:35:43.551 ' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:43.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.551 --rc genhtml_branch_coverage=1 00:35:43.551 --rc genhtml_function_coverage=1 00:35:43.551 --rc genhtml_legend=1 00:35:43.551 --rc geninfo_all_blocks=1 00:35:43.551 --rc geninfo_unexecuted_blocks=1 00:35:43.551 00:35:43.551 ' 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.551 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.552 01:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:45.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:45.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.506 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:45.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:45.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:35:45.801 00:35:45.801 --- 10.0.0.2 ping statistics --- 00:35:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.801 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:35:45.801 00:35:45.801 --- 10.0.0.1 ping statistics --- 00:35:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.801 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=410190 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 410190 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 410190 ']' 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.801 01:03:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.801 [2024-12-07 01:03:01.852334] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:35:45.801 [2024-12-07 01:03:01.852433] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.801 [2024-12-07 01:03:01.925156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:46.076 [2024-12-07 01:03:01.972399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.076 [2024-12-07 01:03:01.972456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.076 [2024-12-07 01:03:01.972470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.076 [2024-12-07 01:03:01.972481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.076 [2024-12-07 01:03:01.972502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.076 [2024-12-07 01:03:01.974068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.076 [2024-12-07 01:03:01.974126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.076 [2024-12-07 01:03:01.974122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 [2024-12-07 01:03:02.109427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 Malloc0 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.076 [2024-12-07 01:03:02.168845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:46.076 { 00:35:46.076 "params": { 00:35:46.076 "name": "Nvme$subsystem", 00:35:46.076 "trtype": "$TEST_TRANSPORT", 00:35:46.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.076 "adrfam": "ipv4", 00:35:46.076 "trsvcid": "$NVMF_PORT", 00:35:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.076 "hdgst": ${hdgst:-false}, 00:35:46.076 "ddgst": ${ddgst:-false} 00:35:46.076 }, 00:35:46.076 "method": "bdev_nvme_attach_controller" 00:35:46.076 } 00:35:46.076 EOF 00:35:46.076 )") 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:46.076 01:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:46.076 "params": { 00:35:46.076 "name": "Nvme1", 00:35:46.076 "trtype": "tcp", 00:35:46.076 "traddr": "10.0.0.2", 00:35:46.076 "adrfam": "ipv4", 00:35:46.076 "trsvcid": "4420", 00:35:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.076 "hdgst": false, 00:35:46.076 "ddgst": false 00:35:46.076 }, 00:35:46.076 "method": "bdev_nvme_attach_controller" 00:35:46.076 }' 00:35:46.076 [2024-12-07 01:03:02.217777] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:35:46.076 [2024-12-07 01:03:02.217865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410233 ] 00:35:46.334 [2024-12-07 01:03:02.288513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.334 [2024-12-07 01:03:02.335321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.593 Running I/O for 1 seconds... 00:35:47.531 8618.00 IOPS, 33.66 MiB/s 00:35:47.531 Latency(us) 00:35:47.531 [2024-12-07T00:03:03.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.531 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:47.531 Verification LBA range: start 0x0 length 0x4000 00:35:47.531 Nvme1n1 : 1.01 8657.58 33.82 0.00 0.00 14716.49 2973.39 14757.74 00:35:47.531 [2024-12-07T00:03:03.682Z] =================================================================================================================== 00:35:47.531 [2024-12-07T00:03:03.682Z] Total : 8657.58 33.82 0.00 0.00 14716.49 2973.39 14757.74 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=410397 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:47.792 { 00:35:47.792 "params": { 00:35:47.792 "name": "Nvme$subsystem", 00:35:47.792 "trtype": "$TEST_TRANSPORT", 00:35:47.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.792 "adrfam": "ipv4", 00:35:47.792 "trsvcid": "$NVMF_PORT", 00:35:47.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.792 "hdgst": ${hdgst:-false}, 00:35:47.792 "ddgst": ${ddgst:-false} 00:35:47.792 }, 00:35:47.792 "method": "bdev_nvme_attach_controller" 00:35:47.792 } 00:35:47.792 EOF 00:35:47.792 )") 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:47.792 01:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:47.792 "params": { 00:35:47.792 "name": "Nvme1", 00:35:47.792 "trtype": "tcp", 00:35:47.792 "traddr": "10.0.0.2", 00:35:47.792 "adrfam": "ipv4", 00:35:47.792 "trsvcid": "4420", 00:35:47.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.792 "hdgst": false, 00:35:47.792 "ddgst": false 00:35:47.792 }, 00:35:47.792 "method": "bdev_nvme_attach_controller" 00:35:47.792 }' 00:35:47.792 [2024-12-07 01:03:03.819847] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:47.792 [2024-12-07 01:03:03.819944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410397 ] 00:35:47.792 [2024-12-07 01:03:03.892591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.792 [2024-12-07 01:03:03.939468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.051 Running I/O for 15 seconds... 00:35:50.362 8483.00 IOPS, 33.14 MiB/s [2024-12-07T00:03:07.086Z] 8547.00 IOPS, 33.39 MiB/s [2024-12-07T00:03:07.086Z] 01:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 410190 00:35:50.935 01:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:50.935 [2024-12-07 01:03:06.782135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 
01:03:06.782424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.782951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.782965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.935 [2024-12-07 01:03:06.783358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.935 [2024-12-07 01:03:06.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 
[2024-12-07 01:03:06.783671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.783963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.783990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.936 [2024-12-07 01:03:06.784459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.936 [2024-12-07 01:03:06.784486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.936 [2024-12-07 01:03:06.784512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.936 [2024-12-07 01:03:06.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51632 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.784990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 
01:03:06.785201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.937 [2024-12-07 01:03:06.785682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.937 [2024-12-07 01:03:06.785694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.938 [2024-12-07 01:03:06.785721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.938 [2024-12-07 01:03:06.785747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.938 [2024-12-07 01:03:06.785773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.938 [2024-12-07 01:03:06.785798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:50.938 [2024-12-07 01:03:06.785828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.785854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.785906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.785931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.785944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.785956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.786004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.786021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.786046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.786061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.786077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.938 [2024-12-07 01:03:06.786091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.786106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x602b30 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.786124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:50.938 [2024-12-07 01:03:06.786135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:50.938 [2024-12-07 01:03:06.786146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51512 len:8 PRP1 0x0 PRP2 0x0 00:35:50.938 [2024-12-07 01:03:06.786159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:50.938 [2024-12-07 01:03:06.789308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.789386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.790105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.790140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.790157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.790403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.790601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.790621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.790638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.938 [2024-12-07 01:03:06.790652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.938 [2024-12-07 01:03:06.803028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.803409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.803442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.803459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.803709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.803915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.803934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.803946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:50.938 [2024-12-07 01:03:06.803958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.938 [2024-12-07 01:03:06.816199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.816563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.816591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.816613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.816843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.817099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.817122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.817135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.938 [2024-12-07 01:03:06.817148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.938 [2024-12-07 01:03:06.829297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.829624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.829652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.829668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.829887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.830157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.830191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.830206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.938 [2024-12-07 01:03:06.830219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.938 [2024-12-07 01:03:06.842473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.842824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.842852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.842869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.843138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.843372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.843391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.843404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.938 [2024-12-07 01:03:06.843416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.938 [2024-12-07 01:03:06.855638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.938 [2024-12-07 01:03:06.856019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.938 [2024-12-07 01:03:06.856051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.938 [2024-12-07 01:03:06.856082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.938 [2024-12-07 01:03:06.856331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.938 [2024-12-07 01:03:06.856554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.938 [2024-12-07 01:03:06.856573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.938 [2024-12-07 01:03:06.856586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.938 [2024-12-07 01:03:06.856598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.939 [2024-12-07 01:03:06.868712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.869134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.869162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.869179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.869415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.869620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.869639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.869668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.869680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.939 [2024-12-07 01:03:06.881808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.882220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.882248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.882264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.882504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.882711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.882730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.882742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.882754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.939 [2024-12-07 01:03:06.894836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.895259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.895287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.895304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.895541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.895747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.895767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.895779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.895805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.939 [2024-12-07 01:03:06.908259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.908581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.908622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.908638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.908841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.909086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.909108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.909122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.909135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.939 [2024-12-07 01:03:06.921671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.922098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.922127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.922145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.922389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.922601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.922620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.922634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.922646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.939 [2024-12-07 01:03:06.935057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.935391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.935419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.935435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.935655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.935880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.935899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.935912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.935924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.939 [2024-12-07 01:03:06.948362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.948675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.948703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.948718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.948940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.949179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.949200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.949213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.949225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.939 [2024-12-07 01:03:06.961511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.961886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.961924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.961944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.939 [2024-12-07 01:03:06.962193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.939 [2024-12-07 01:03:06.962422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.939 [2024-12-07 01:03:06.962441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.939 [2024-12-07 01:03:06.962454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.939 [2024-12-07 01:03:06.962465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.939 [2024-12-07 01:03:06.974705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.939 [2024-12-07 01:03:06.975048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.939 [2024-12-07 01:03:06.975090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.939 [2024-12-07 01:03:06.975106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:06.975364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:06.975571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:06.975590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:06.975603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:06.975615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.940 [2024-12-07 01:03:06.987792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:06.988144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:06.988171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:06.988195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:06.988445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:06.988669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:06.988688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:06.988700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:06.988712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.940 [2024-12-07 01:03:07.000902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.001287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.001338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.001354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.001567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.001777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.001797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.001809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.001821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.940 [2024-12-07 01:03:07.014134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.014497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.014525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.014541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.014772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.014979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.015022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.015038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.015050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.940 [2024-12-07 01:03:07.027300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.027644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.027672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.027688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.027906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.028159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.028180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.028193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.028205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.940 [2024-12-07 01:03:07.040601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.040963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.041031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.041262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.041507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.041527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.041545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.041559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.940 [2024-12-07 01:03:07.054476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.054832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.054861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.054879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.055134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.055362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.055381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.055394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.055406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:50.940 [2024-12-07 01:03:07.067769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.940 [2024-12-07 01:03:07.068121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.940 [2024-12-07 01:03:07.068151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:50.940 [2024-12-07 01:03:07.068168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:50.940 [2024-12-07 01:03:07.068423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:50.940 [2024-12-07 01:03:07.068614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.940 [2024-12-07 01:03:07.068633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.940 [2024-12-07 01:03:07.068645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.940 [2024-12-07 01:03:07.068657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.201 [2024-12-07 01:03:07.081125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.081508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.081538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.081558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.081801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.082054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.082076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.082090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.082103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.201 [2024-12-07 01:03:07.094520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.094926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.094954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.094970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.095275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.095499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.095518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.095531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.095542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.201 [2024-12-07 01:03:07.107696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.108105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.108133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.108149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.108387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.108593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.108612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.108624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.108636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.201 [2024-12-07 01:03:07.120860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.121211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.121240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.121258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.121515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.121706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.121725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.121737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.121749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.201 7581.00 IOPS, 29.61 MiB/s [2024-12-07T00:03:07.352Z] [2024-12-07 01:03:07.135551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.135848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.135891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.135912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.136178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.136389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.136408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.136426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.136438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.201 [2024-12-07 01:03:07.148909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.149253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.149282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.149298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.149535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.149757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.149775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.149790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.149802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.201 [2024-12-07 01:03:07.162621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.162971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.163021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.163038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.163255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.163503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.163523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.163537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.163549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.201 [2024-12-07 01:03:07.176383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.176781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.176811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.176828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.177064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.177292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.177328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.177341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.201 [2024-12-07 01:03:07.177353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.201 [2024-12-07 01:03:07.189968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.201 [2024-12-07 01:03:07.190317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.201 [2024-12-07 01:03:07.190346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.201 [2024-12-07 01:03:07.190363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.201 [2024-12-07 01:03:07.190595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.201 [2024-12-07 01:03:07.190814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.201 [2024-12-07 01:03:07.190834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.201 [2024-12-07 01:03:07.190847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.190860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.202 [2024-12-07 01:03:07.203445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.204117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.204151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.204169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.204411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.204602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.204621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.204634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.204646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.202 [2024-12-07 01:03:07.217093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.217520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.217549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.217566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.217809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.218040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.218063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.218084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.218098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.202 [2024-12-07 01:03:07.230832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.231173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.231203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.231220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.231453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.231692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.231715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.231737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.231750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.202 [2024-12-07 01:03:07.244486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.244938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.244967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.244992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.245219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.245442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.245463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.245476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.245488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.202 [2024-12-07 01:03:07.257776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.258076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.258106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.258123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.258355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.258561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.258580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.258593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.258604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.202 [2024-12-07 01:03:07.271461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.271962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.272011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.272048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.272275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.272509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.272529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.272543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.272556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.202 [2024-12-07 01:03:07.285146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.285549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.285578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.285595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.285839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.286093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.286116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.286131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.286144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.202 [2024-12-07 01:03:07.298536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.298967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.299024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.299061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.299279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.299539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.299575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.299589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.299602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.202 [2024-12-07 01:03:07.311882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.312207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.312236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.312258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.312509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.312714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.312734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.202 [2024-12-07 01:03:07.312746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.202 [2024-12-07 01:03:07.312758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.202 [2024-12-07 01:03:07.325279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.202 [2024-12-07 01:03:07.325713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.202 [2024-12-07 01:03:07.325745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.202 [2024-12-07 01:03:07.325760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.202 [2024-12-07 01:03:07.326006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.202 [2024-12-07 01:03:07.326203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.202 [2024-12-07 01:03:07.326223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.203 [2024-12-07 01:03:07.326237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.203 [2024-12-07 01:03:07.326249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.203 [2024-12-07 01:03:07.338491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.203 [2024-12-07 01:03:07.338898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.203 [2024-12-07 01:03:07.338927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.203 [2024-12-07 01:03:07.338944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.203 [2024-12-07 01:03:07.339199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.203 [2024-12-07 01:03:07.339423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.203 [2024-12-07 01:03:07.339442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.203 [2024-12-07 01:03:07.339454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.203 [2024-12-07 01:03:07.339466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.463 [2024-12-07 01:03:07.351857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.352230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.352258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.352275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.352525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.352736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.352755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.352768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.352779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.463 [2024-12-07 01:03:07.365143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.365539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.365567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.365583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.365801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.366034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.366069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.366083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.366096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.463 [2024-12-07 01:03:07.378392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.378812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.378841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.378857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.379107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.379331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.379351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.379363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.379375] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.463 [2024-12-07 01:03:07.391580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.391988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.392040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.392056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.392293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.392499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.392519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.392536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.392549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.463 [2024-12-07 01:03:07.404888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.405246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.405274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.405291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.405527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.405749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.405769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.405781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.405793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.463 [2024-12-07 01:03:07.418187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.418613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.463 [2024-12-07 01:03:07.418641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.463 [2024-12-07 01:03:07.418658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.463 [2024-12-07 01:03:07.418896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.463 [2024-12-07 01:03:07.419130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.463 [2024-12-07 01:03:07.419151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.463 [2024-12-07 01:03:07.419164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.463 [2024-12-07 01:03:07.419176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.463 [2024-12-07 01:03:07.431368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.463 [2024-12-07 01:03:07.431715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.431744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.431760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.432003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.432226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.432247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.432261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.432273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.464 [2024-12-07 01:03:07.444463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.444879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.444907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.444924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.445173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.445395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.445430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.445443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.445456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.464 [2024-12-07 01:03:07.457552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.457896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.457925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.457941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.458206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.458434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.458455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.458468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.458480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.464 [2024-12-07 01:03:07.470669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.471015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.471044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.471061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.471295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.471502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.471523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.471536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.471548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.464 [2024-12-07 01:03:07.483731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.484045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.484075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.484096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.484316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.484523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.484544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.484557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.484569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.464 [2024-12-07 01:03:07.496862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.497298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.497344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.497361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.497598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.497804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.497825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.497838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.497850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.464 [2024-12-07 01:03:07.509936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.510309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.510354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.510370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.510607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.510814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.510835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.510848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.510861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.464 [2024-12-07 01:03:07.523108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.523499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.523528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.523544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.523762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.523969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.524018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.524033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.524046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.464 [2024-12-07 01:03:07.536311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.536677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.536706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.536723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.536959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.537198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.537220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.537233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.464 [2024-12-07 01:03:07.537246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.464 [2024-12-07 01:03:07.549622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.464 [2024-12-07 01:03:07.550068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.464 [2024-12-07 01:03:07.550099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.464 [2024-12-07 01:03:07.550116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.464 [2024-12-07 01:03:07.550334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.464 [2024-12-07 01:03:07.550588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.464 [2024-12-07 01:03:07.550610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.464 [2024-12-07 01:03:07.550624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.465 [2024-12-07 01:03:07.550637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.465 [2024-12-07 01:03:07.563108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.465 [2024-12-07 01:03:07.563494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-12-07 01:03:07.563523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.465 [2024-12-07 01:03:07.563539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.465 [2024-12-07 01:03:07.563776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.465 [2024-12-07 01:03:07.563987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.465 [2024-12-07 01:03:07.564021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.465 [2024-12-07 01:03:07.564036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.465 [2024-12-07 01:03:07.564054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.465 [2024-12-07 01:03:07.576467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.465 [2024-12-07 01:03:07.576824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-12-07 01:03:07.576854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.465 [2024-12-07 01:03:07.576871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.465 [2024-12-07 01:03:07.577125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.465 [2024-12-07 01:03:07.577345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.465 [2024-12-07 01:03:07.577365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.465 [2024-12-07 01:03:07.577378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.465 [2024-12-07 01:03:07.577390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.465 [2024-12-07 01:03:07.589867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.465 [2024-12-07 01:03:07.590197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-12-07 01:03:07.590227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.465 [2024-12-07 01:03:07.590244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.465 [2024-12-07 01:03:07.590485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.465 [2024-12-07 01:03:07.590703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.465 [2024-12-07 01:03:07.590723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.465 [2024-12-07 01:03:07.590736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.465 [2024-12-07 01:03:07.590748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.465 [2024-12-07 01:03:07.603121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.465 [2024-12-07 01:03:07.603521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.465 [2024-12-07 01:03:07.603549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.465 [2024-12-07 01:03:07.603565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.465 [2024-12-07 01:03:07.603799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.465 [2024-12-07 01:03:07.604032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.465 [2024-12-07 01:03:07.604054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.465 [2024-12-07 01:03:07.604067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.465 [2024-12-07 01:03:07.604079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.725 [2024-12-07 01:03:07.616435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.616806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.616833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.616849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.617094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.617317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.617353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.617366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.617379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.725 [2024-12-07 01:03:07.629765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.630116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.630146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.630163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.630404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.630630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.630650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.630663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.630675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.725 [2024-12-07 01:03:07.643281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.643660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.643707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.643724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.643969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.644226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.644249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.644264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.644278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.725 [2024-12-07 01:03:07.656724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.657050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.657080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.657097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.657321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.657550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.657570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.657583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.657595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.725 [2024-12-07 01:03:07.670138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.670533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.670562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.670578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.670821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.671122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.671145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.671159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.671172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.725 [2024-12-07 01:03:07.683531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.683940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.683969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.684013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.684257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.684479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.684500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.684513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.684526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.725 [2024-12-07 01:03:07.696758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.697106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.697135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.697152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.697388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.697593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.697619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.697632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.697644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.725 [2024-12-07 01:03:07.710050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.710404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.710433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.725 [2024-12-07 01:03:07.710449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.725 [2024-12-07 01:03:07.710686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.725 [2024-12-07 01:03:07.710891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.725 [2024-12-07 01:03:07.710912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.725 [2024-12-07 01:03:07.710926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.725 [2024-12-07 01:03:07.710938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.725 [2024-12-07 01:03:07.723392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.725 [2024-12-07 01:03:07.723705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.725 [2024-12-07 01:03:07.723747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.723782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.724020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.724262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.724294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.724324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.724346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.726 [2024-12-07 01:03:07.736682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.737078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.737108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.737125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.737368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.737560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.737579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.737591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.737613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.726 [2024-12-07 01:03:07.750160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.750552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.750580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.750596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.750835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.751082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.751104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.751117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.751130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.726 [2024-12-07 01:03:07.763282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.763704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.763732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.763748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.763985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.764209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.764231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.764244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.764256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.726 [2024-12-07 01:03:07.776474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.776883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.776912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.776929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.777192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.777403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.777423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.777436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.777449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.726 [2024-12-07 01:03:07.789505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.789857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.789884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.789899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.790145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.790359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.790379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.790391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.790402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.726 [2024-12-07 01:03:07.802654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.802989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.803026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.803059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.803306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.803560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.803582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.803595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.803608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.726 [2024-12-07 01:03:07.815898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.816292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.816321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.816352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.816580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.816770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.816791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.816804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.816816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.726 [2024-12-07 01:03:07.829129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.829506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.829522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.829765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.829969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.830020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.830035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.830064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.726 [2024-12-07 01:03:07.842389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.726 [2024-12-07 01:03:07.842745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.726 [2024-12-07 01:03:07.842773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.726 [2024-12-07 01:03:07.842790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.726 [2024-12-07 01:03:07.843037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.726 [2024-12-07 01:03:07.843243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.726 [2024-12-07 01:03:07.843262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.726 [2024-12-07 01:03:07.843275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.726 [2024-12-07 01:03:07.843287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.726 [2024-12-07 01:03:07.855693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.727 [2024-12-07 01:03:07.856047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.727 [2024-12-07 01:03:07.856075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.727 [2024-12-07 01:03:07.856091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.727 [2024-12-07 01:03:07.856311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.727 [2024-12-07 01:03:07.856519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.727 [2024-12-07 01:03:07.856538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.727 [2024-12-07 01:03:07.856551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.727 [2024-12-07 01:03:07.856562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.727 [2024-12-07 01:03:07.868921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.727 [2024-12-07 01:03:07.869341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.727 [2024-12-07 01:03:07.869370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.727 [2024-12-07 01:03:07.869386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.727 [2024-12-07 01:03:07.869610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.727 [2024-12-07 01:03:07.869837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.727 [2024-12-07 01:03:07.869863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.727 [2024-12-07 01:03:07.869877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.727 [2024-12-07 01:03:07.869890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.987 [2024-12-07 01:03:07.882112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.882494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.882522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.882538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.882755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.882963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.883005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.883021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.883039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.987 [2024-12-07 01:03:07.895324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.895729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.895757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.895773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.896020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.896223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.896243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.896256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.896270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.987 [2024-12-07 01:03:07.908527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.908922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.908976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.908992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.909264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.909470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.909490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.909503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.909519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.987 [2024-12-07 01:03:07.921796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.922196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.922269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.922285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.922533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.922740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.922774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.922787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.922799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.987 [2024-12-07 01:03:07.935100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.935586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.935641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.935657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.935903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.936103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.936123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.936136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.936148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.987 [2024-12-07 01:03:07.948413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.948824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.948876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.948892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.949149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.949359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.949379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.949392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.949404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.987 [2024-12-07 01:03:07.961597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.962055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.962089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.962106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.962357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.962563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.962582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.962595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.962607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.987 [2024-12-07 01:03:07.974771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.975121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.975159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.975193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.975431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.975627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.975647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.975660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.975672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.987 [2024-12-07 01:03:07.988084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:07.988478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.987 [2024-12-07 01:03:07.988506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.987 [2024-12-07 01:03:07.988523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.987 [2024-12-07 01:03:07.988758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.987 [2024-12-07 01:03:07.988963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.987 [2024-12-07 01:03:07.989009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.987 [2024-12-07 01:03:07.989024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.987 [2024-12-07 01:03:07.989037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.987 [2024-12-07 01:03:08.001255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.987 [2024-12-07 01:03:08.001677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.001705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.001721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.001962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.002201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.002223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.002237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.002249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.988 [2024-12-07 01:03:08.014465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.014744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.014802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.015027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.015247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.015269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.015283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.015296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.988 [2024-12-07 01:03:08.027657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.028012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.028041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.028058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.028290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.028498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.028519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.028531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.028545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.988 [2024-12-07 01:03:08.040820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.041194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.041224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.041241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.041493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.041698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.041722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.041737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.041753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.988 [2024-12-07 01:03:08.053991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.054424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.054455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.054472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.054714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.054948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.054986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.055012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.055043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.988 [2024-12-07 01:03:08.067257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.067683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.067736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.067752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.067992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.068219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.068240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.068253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.068266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.988 [2024-12-07 01:03:08.080475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.080822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.080851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.080867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.081135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.081364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.081385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.081398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.081410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.988 [2024-12-07 01:03:08.093660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.094072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.094101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.094117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.094351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.094558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.094578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.094591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.094603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.988 [2024-12-07 01:03:08.106954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.107329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.107358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.107374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.107611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.107817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.107837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.107849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.107861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.988 [2024-12-07 01:03:08.120222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.120551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.988 [2024-12-07 01:03:08.120592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.988 [2024-12-07 01:03:08.120608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.988 [2024-12-07 01:03:08.120805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.988 [2024-12-07 01:03:08.121053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.988 [2024-12-07 01:03:08.121075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.988 [2024-12-07 01:03:08.121088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.988 [2024-12-07 01:03:08.121101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:51.988 [2024-12-07 01:03:08.133685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.988 [2024-12-07 01:03:08.134060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.989 [2024-12-07 01:03:08.134093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:51.989 [2024-12-07 01:03:08.134109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:51.989 [2024-12-07 01:03:08.134328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:51.989 [2024-12-07 01:03:08.134540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.989 [2024-12-07 01:03:08.134560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.989 [2024-12-07 01:03:08.134574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.989 [2024-12-07 01:03:08.134586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.250 5685.75 IOPS, 22.21 MiB/s [2024-12-07T00:03:08.401Z] [2024-12-07 01:03:08.146867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.147253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.147282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.147298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.147533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.147730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.147750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.147779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.147791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.250 [2024-12-07 01:03:08.160153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.160464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.160492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.160508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.160727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.160934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.160954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.160966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.160978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.250 [2024-12-07 01:03:08.173434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.173840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.173869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.173885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.174143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.174359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.174395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.174407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.174419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.250 [2024-12-07 01:03:08.186584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.186926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.186956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.186973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.187240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.187463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.187483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.187496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.187509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.250 [2024-12-07 01:03:08.199802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.200137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.200167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.200183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.200418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.200623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.200642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.200655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.200666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.250 [2024-12-07 01:03:08.212984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.213336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.213363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.213379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.213610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.213815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.213835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.213853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.213867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.250 [2024-12-07 01:03:08.226267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.226589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.226663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.226679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.226911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.227155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.227177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.227191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.227203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.250 [2024-12-07 01:03:08.239672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.240010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.240042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.240060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.240293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.240506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.240527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.240540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.240553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.250 [2024-12-07 01:03:08.253050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.250 [2024-12-07 01:03:08.253481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.250 [2024-12-07 01:03:08.253511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.250 [2024-12-07 01:03:08.253527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.250 [2024-12-07 01:03:08.253775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.250 [2024-12-07 01:03:08.253966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.250 [2024-12-07 01:03:08.254015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.250 [2024-12-07 01:03:08.254032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.250 [2024-12-07 01:03:08.254046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.250 [2024-12-07 01:03:08.266478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.266830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.266859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.266875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.267116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.267364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.267384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.267397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.267409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.251 [2024-12-07 01:03:08.280046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.280379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.280406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.280421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.280659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.280876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.280896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.280925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.280938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.251 [2024-12-07 01:03:08.293587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.294008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.294052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.294069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.294299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.294506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.294525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.294538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.294549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.251 [2024-12-07 01:03:08.307120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.307533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.307568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.307586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.307817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.308075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.308098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.308112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.308126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.251 [2024-12-07 01:03:08.320561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.320917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.320945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.320962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.321216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.321448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.321479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.321492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.321504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.251 [2024-12-07 01:03:08.334068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.334480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.334512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.334545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.334782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.334993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.335026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.335043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.335056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.251 [2024-12-07 01:03:08.347405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.347715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.347758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.347775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.348010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.348244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.348276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.348289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.348302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.251 [2024-12-07 01:03:08.360707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.361089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.361118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.361134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.361359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.361583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.361604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.361616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.361629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.251 [2024-12-07 01:03:08.374042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.374415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.374443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.374458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.374676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.251 [2024-12-07 01:03:08.374888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.251 [2024-12-07 01:03:08.374909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.251 [2024-12-07 01:03:08.374922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.251 [2024-12-07 01:03:08.374945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.251 [2024-12-07 01:03:08.387395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.251 [2024-12-07 01:03:08.387757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.251 [2024-12-07 01:03:08.387785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.251 [2024-12-07 01:03:08.387800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.251 [2024-12-07 01:03:08.388043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.252 [2024-12-07 01:03:08.388262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.252 [2024-12-07 01:03:08.388294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.252 [2024-12-07 01:03:08.388316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.252 [2024-12-07 01:03:08.388330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.513 [2024-12-07 01:03:08.400861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.513 [2024-12-07 01:03:08.401194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.513 [2024-12-07 01:03:08.401237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.513 [2024-12-07 01:03:08.401253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.513 [2024-12-07 01:03:08.401479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.513 [2024-12-07 01:03:08.401690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.513 [2024-12-07 01:03:08.401711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.513 [2024-12-07 01:03:08.401725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.513 [2024-12-07 01:03:08.401738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.513 [2024-12-07 01:03:08.414220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.513 [2024-12-07 01:03:08.414652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.513 [2024-12-07 01:03:08.414681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.513 [2024-12-07 01:03:08.414698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.513 [2024-12-07 01:03:08.414937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.513 [2024-12-07 01:03:08.415183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.513 [2024-12-07 01:03:08.415205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.513 [2024-12-07 01:03:08.415219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.513 [2024-12-07 01:03:08.415232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.513 [2024-12-07 01:03:08.427518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.513 [2024-12-07 01:03:08.427817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.513 [2024-12-07 01:03:08.427861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.513 [2024-12-07 01:03:08.427877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.513 [2024-12-07 01:03:08.428130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.513 [2024-12-07 01:03:08.428363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.513 [2024-12-07 01:03:08.428384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.513 [2024-12-07 01:03:08.428396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.513 [2024-12-07 01:03:08.428408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.513 [2024-12-07 01:03:08.440908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.513 [2024-12-07 01:03:08.441336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.513 [2024-12-07 01:03:08.441366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.513 [2024-12-07 01:03:08.441383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.513 [2024-12-07 01:03:08.441622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.441821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.441842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.441855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.441868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.514 [2024-12-07 01:03:08.454194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.454581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.454611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.454628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.454870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.455109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.455131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.455145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.455158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.514 [2024-12-07 01:03:08.467450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.467865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.467894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.467910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.468164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.468382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.468403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.468416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.468430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.514 [2024-12-07 01:03:08.480826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.481205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.481235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.481258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.481512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.481725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.481744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.481757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.481770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.514 [2024-12-07 01:03:08.494211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.494534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.494576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.494592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.494811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.495049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.495070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.495084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.495096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.514 [2024-12-07 01:03:08.507584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.507909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.507937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.507954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.508194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.508426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.508446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.508459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.508471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.514 [2024-12-07 01:03:08.520920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.521242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.521286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.521303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.521535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.521751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.521772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.521786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.521798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.514 [2024-12-07 01:03:08.534252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.534575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.534603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.534619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.534837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.535096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.535133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.535147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.535160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.514 [2024-12-07 01:03:08.547599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.548083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.548114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.548130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.548374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.548586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.548607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.548620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.548632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.514 [2024-12-07 01:03:08.560868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.561320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.561350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.561368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.561615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.514 [2024-12-07 01:03:08.561873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.514 [2024-12-07 01:03:08.561896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.514 [2024-12-07 01:03:08.561917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.514 [2024-12-07 01:03:08.561932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.514 [2024-12-07 01:03:08.574270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.514 [2024-12-07 01:03:08.574640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.514 [2024-12-07 01:03:08.574670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.514 [2024-12-07 01:03:08.574686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.514 [2024-12-07 01:03:08.574930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.575177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.575199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.575213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.575225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.515 [2024-12-07 01:03:08.587561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.587912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.587939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.587955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.588210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.588446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.588467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.588480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.588493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.515 [2024-12-07 01:03:08.600940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.601384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.601414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.601430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.601675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.601886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.601906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.601919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.601931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.515 [2024-12-07 01:03:08.614269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.614643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.614674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.614691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.614938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.615177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.615198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.615211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.615224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.515 [2024-12-07 01:03:08.627506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.627924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.627955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.627972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.628227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.628469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.628492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.628506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.628518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.515 [2024-12-07 01:03:08.640857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.641272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.641302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.641333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.641560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.641788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.641809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.641822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.641835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.515 [2024-12-07 01:03:08.654103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.515 [2024-12-07 01:03:08.654451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.515 [2024-12-07 01:03:08.654479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.515 [2024-12-07 01:03:08.654501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.515 [2024-12-07 01:03:08.654725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.515 [2024-12-07 01:03:08.654937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.515 [2024-12-07 01:03:08.654958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.515 [2024-12-07 01:03:08.654971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.515 [2024-12-07 01:03:08.654985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.778 [2024-12-07 01:03:08.667636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.667992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.668051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.668068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.668300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.668529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.668549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.668562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.668574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.778 [2024-12-07 01:03:08.681049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.681452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.681482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.681498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.681746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.681958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.682001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.682017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.682030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.778 [2024-12-07 01:03:08.694417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.694770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.694799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.694815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.695066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.695310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.695347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.695361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.695374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.778 [2024-12-07 01:03:08.707676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.708095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.708125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.708142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.708386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.708582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.708602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.708615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.708627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.778 [2024-12-07 01:03:08.720923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.721301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.721330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.721347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.721589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.721785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.721805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.721817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.721830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.778 [2024-12-07 01:03:08.734180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.734635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.734663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.734680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.734918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.735163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.778 [2024-12-07 01:03:08.735186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.778 [2024-12-07 01:03:08.735205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.778 [2024-12-07 01:03:08.735219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.778 [2024-12-07 01:03:08.747475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.778 [2024-12-07 01:03:08.747788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.778 [2024-12-07 01:03:08.747817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.778 [2024-12-07 01:03:08.747834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.778 [2024-12-07 01:03:08.748073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.778 [2024-12-07 01:03:08.748310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.748330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.748344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.748371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.779 [2024-12-07 01:03:08.760827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.761243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.761273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.761290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.761544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.761740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.761761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.761774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.761787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.779 [2024-12-07 01:03:08.774158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.774545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.774574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.774591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.774835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.775088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.775111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.775126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.775139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.779 [2024-12-07 01:03:08.787570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.787893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.787921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.787937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.788204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.788421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.788442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.788454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.788467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.779 [2024-12-07 01:03:08.800838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.801281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.801310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.801326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.801571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.801782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.801802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.801815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.801827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.779 [2024-12-07 01:03:08.814189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.814581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.814611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.814627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.814862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.815134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.815158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.815173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.815187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.779 [2024-12-07 01:03:08.827819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.828165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.828195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.828217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.828453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.828665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.828685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.828698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.828710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.779 [2024-12-07 01:03:08.841270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.841715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.841743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.841759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.842005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.842207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.842227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.842240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.842260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.779 [2024-12-07 01:03:08.854685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.855072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.855101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.855116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.855334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.855545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.855565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.855578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.855590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.779 [2024-12-07 01:03:08.868000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.868448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.868477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.868494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.868737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.868954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.779 [2024-12-07 01:03:08.868974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.779 [2024-12-07 01:03:08.868987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.779 [2024-12-07 01:03:08.869022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.779 [2024-12-07 01:03:08.881321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.779 [2024-12-07 01:03:08.881676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.779 [2024-12-07 01:03:08.881704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.779 [2024-12-07 01:03:08.881720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.779 [2024-12-07 01:03:08.881958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.779 [2024-12-07 01:03:08.882168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.780 [2024-12-07 01:03:08.882190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.780 [2024-12-07 01:03:08.882203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.780 [2024-12-07 01:03:08.882215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.780 [2024-12-07 01:03:08.894602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.780 [2024-12-07 01:03:08.894956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.780 [2024-12-07 01:03:08.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.780 [2024-12-07 01:03:08.895012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.780 [2024-12-07 01:03:08.895257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.780 [2024-12-07 01:03:08.895469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.780 [2024-12-07 01:03:08.895489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.780 [2024-12-07 01:03:08.895503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.780 [2024-12-07 01:03:08.895515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.780 [2024-12-07 01:03:08.907920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.780 [2024-12-07 01:03:08.908305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.780 [2024-12-07 01:03:08.908350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.780 [2024-12-07 01:03:08.908366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.780 [2024-12-07 01:03:08.908604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.780 [2024-12-07 01:03:08.908815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.780 [2024-12-07 01:03:08.908836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.780 [2024-12-07 01:03:08.908849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.780 [2024-12-07 01:03:08.908868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.780 [2024-12-07 01:03:08.921235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.780 [2024-12-07 01:03:08.921685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.780 [2024-12-07 01:03:08.921715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:52.780 [2024-12-07 01:03:08.921731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:52.780 [2024-12-07 01:03:08.921973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:52.780 [2024-12-07 01:03:08.922212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.780 [2024-12-07 01:03:08.922233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.780 [2024-12-07 01:03:08.922246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.780 [2024-12-07 01:03:08.922259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.040 [2024-12-07 01:03:08.934630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:08.935061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:08.935091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:08.935108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:08.935353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:08.935566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:08.935586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:08.935599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:08.935612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.040 [2024-12-07 01:03:08.948032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:08.948428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:08.948456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:08.948472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:08.948708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:08.948920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:08.948941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:08.948953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:08.948966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.040 [2024-12-07 01:03:08.961310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:08.961667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:08.961696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:08.961713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:08.961950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:08.962193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:08.962217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:08.962232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:08.962255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.040 [2024-12-07 01:03:08.974700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:08.975065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:08.975095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:08.975112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:08.975358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:08.975570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:08.975590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:08.975603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:08.975616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.040 [2024-12-07 01:03:08.988061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:08.988421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:08.988450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:08.988467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:08.988691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:08.988903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:08.988924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:08.988937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:08.988950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.040 [2024-12-07 01:03:09.001353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.040 [2024-12-07 01:03:09.001708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.040 [2024-12-07 01:03:09.001737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.040 [2024-12-07 01:03:09.001753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.040 [2024-12-07 01:03:09.002003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.040 [2024-12-07 01:03:09.002238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.040 [2024-12-07 01:03:09.002261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.040 [2024-12-07 01:03:09.002275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.040 [2024-12-07 01:03:09.002288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.041 [2024-12-07 01:03:09.014623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.015049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.015080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.015096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.015343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.015555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.015574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.015587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.015599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.041 [2024-12-07 01:03:09.027906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.028223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.028267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.028283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.028509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.028721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.028741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.028754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.028766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.041 [2024-12-07 01:03:09.041249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.041626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.041654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.041671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.041908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.042148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.042177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.042191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.042204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.041 [2024-12-07 01:03:09.054610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.054993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.055055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.055073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.055307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.055521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.055541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.055554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.055566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.041 [2024-12-07 01:03:09.067957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.068358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.068388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.068406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.068639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.068902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.068925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.068941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.068955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.041 [2024-12-07 01:03:09.081377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.081794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.081823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.081839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.082096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.082342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.082363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.082377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.082394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.041 [2024-12-07 01:03:09.094640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.095054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.095084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.095101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.095349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.095561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.095581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.095593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.095606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.041 [2024-12-07 01:03:09.107984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.108307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.108349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.108366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.108593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.108806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.108826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.108838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.108850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.041 [2024-12-07 01:03:09.121316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.041 [2024-12-07 01:03:09.121720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.041 [2024-12-07 01:03:09.121748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.041 [2024-12-07 01:03:09.121764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.041 [2024-12-07 01:03:09.121989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.041 [2024-12-07 01:03:09.122207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.041 [2024-12-07 01:03:09.122228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.041 [2024-12-07 01:03:09.122250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.041 [2024-12-07 01:03:09.122263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.041 [2024-12-07 01:03:09.134709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.042 [2024-12-07 01:03:09.135039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.042 [2024-12-07 01:03:09.135068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.042 [2024-12-07 01:03:09.135084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.042 [2024-12-07 01:03:09.135311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.042 [2024-12-07 01:03:09.135523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.042 [2024-12-07 01:03:09.135543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.042 [2024-12-07 01:03:09.135556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.042 [2024-12-07 01:03:09.135567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.042 4548.60 IOPS, 17.77 MiB/s [2024-12-07T00:03:09.193Z] [2024-12-07 01:03:09.148099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.042 [2024-12-07 01:03:09.148556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.042 [2024-12-07 01:03:09.148584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.042 [2024-12-07 01:03:09.148600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.042 [2024-12-07 01:03:09.148837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.042 [2024-12-07 01:03:09.149091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.042 [2024-12-07 01:03:09.149113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.042 [2024-12-07 01:03:09.149127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.042 [2024-12-07 01:03:09.149140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.042 [2024-12-07 01:03:09.161524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.042 [2024-12-07 01:03:09.161876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.042 [2024-12-07 01:03:09.161904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.042 [2024-12-07 01:03:09.161920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.042 [2024-12-07 01:03:09.162185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.042 [2024-12-07 01:03:09.162400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.042 [2024-12-07 01:03:09.162420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.042 [2024-12-07 01:03:09.162433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.042 [2024-12-07 01:03:09.162446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.042 [2024-12-07 01:03:09.174830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.042 [2024-12-07 01:03:09.175272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.042 [2024-12-07 01:03:09.175315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.042 [2024-12-07 01:03:09.175332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.042 [2024-12-07 01:03:09.175574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.042 [2024-12-07 01:03:09.175786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.042 [2024-12-07 01:03:09.175805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.042 [2024-12-07 01:03:09.175818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.042 [2024-12-07 01:03:09.175830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.302 [2024-12-07 01:03:09.188260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.188609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.188638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.188654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.188886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.189152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.189189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.189203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.189216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.302 [2024-12-07 01:03:09.201550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.201901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.201930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.201947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.202188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.202437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.202457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.202470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.202482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.302 [2024-12-07 01:03:09.214889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.215269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.215298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.215315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.215553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.215764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.215788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.215802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.215815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.302 [2024-12-07 01:03:09.228254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.228689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.228717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.228734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.228977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.229209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.229231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.229244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.229258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.302 [2024-12-07 01:03:09.241596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.241953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.241982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.242008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.242255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.242485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.242505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.242518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.242530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.302 [2024-12-07 01:03:09.254893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.255337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.255367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.255383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.255627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.255825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.255844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.255857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.255873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.302 [2024-12-07 01:03:09.268537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.268955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.268984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.269009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.269228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.269445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.269465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.269478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.269490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.302 [2024-12-07 01:03:09.281856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.282210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.282239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.282256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.282484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.282696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.282716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.282729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.302 [2024-12-07 01:03:09.282741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.302 [2024-12-07 01:03:09.295394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.302 [2024-12-07 01:03:09.295749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.302 [2024-12-07 01:03:09.295778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.302 [2024-12-07 01:03:09.295794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.302 [2024-12-07 01:03:09.296032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.302 [2024-12-07 01:03:09.296257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.302 [2024-12-07 01:03:09.296294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.302 [2024-12-07 01:03:09.296308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.296321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.303 [2024-12-07 01:03:09.308629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.308952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.308979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.309017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.309279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.309511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.309531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.309544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.309555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.303 [2024-12-07 01:03:09.321918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.322303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.322333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.322349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.322595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.322803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.322824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.322853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.322866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.303 [2024-12-07 01:03:09.335351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.335747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.335777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.335793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.336046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.336255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.336276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.336305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.336317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.303 [2024-12-07 01:03:09.348854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.349296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.349326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.349343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.349594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.349790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.349810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.349822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.349835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.303 [2024-12-07 01:03:09.362142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.362579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.362607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.362623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.362861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.363127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.363148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.363162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.363175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.303 [2024-12-07 01:03:09.375472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.375821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.375850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.375866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.376133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.376349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.376369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.376382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.376394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.303 [2024-12-07 01:03:09.388803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.389256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.389307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.389324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.389582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.389779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.389803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.389817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.389829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.303 [2024-12-07 01:03:09.402174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.402517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.402545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.402562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.402785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.403022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.303 [2024-12-07 01:03:09.403049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.303 [2024-12-07 01:03:09.403063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.303 [2024-12-07 01:03:09.403076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.303 [2024-12-07 01:03:09.415636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.303 [2024-12-07 01:03:09.416022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.303 [2024-12-07 01:03:09.416052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.303 [2024-12-07 01:03:09.416068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.303 [2024-12-07 01:03:09.416301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.303 [2024-12-07 01:03:09.416529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.304 [2024-12-07 01:03:09.416549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.304 [2024-12-07 01:03:09.416562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.304 [2024-12-07 01:03:09.416574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.304 [2024-12-07 01:03:09.429232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.304 [2024-12-07 01:03:09.429574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.304 [2024-12-07 01:03:09.429602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.304 [2024-12-07 01:03:09.429619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.304 [2024-12-07 01:03:09.429844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.304 [2024-12-07 01:03:09.430110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.304 [2024-12-07 01:03:09.430132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.304 [2024-12-07 01:03:09.430145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.304 [2024-12-07 01:03:09.430162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.304 [2024-12-07 01:03:09.442590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.304 [2024-12-07 01:03:09.443019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.304 [2024-12-07 01:03:09.443063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.304 [2024-12-07 01:03:09.443080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.304 [2024-12-07 01:03:09.443319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.304 [2024-12-07 01:03:09.443532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.304 [2024-12-07 01:03:09.443559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.304 [2024-12-07 01:03:09.443571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.304 [2024-12-07 01:03:09.443583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.565 [2024-12-07 01:03:09.456052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.565 [2024-12-07 01:03:09.456461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.456489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.456517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.456752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.456958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.457007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.457025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.457038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.566 [2024-12-07 01:03:09.469382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.469662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.469704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.469720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.469917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.470174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.470197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.470212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.470226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.566 [2024-12-07 01:03:09.482715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.483067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.483101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.483118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.483362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.483573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.483594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.483607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.483619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.566 [2024-12-07 01:03:09.496392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.496741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.496771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.496788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.497016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.497239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.497260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.497275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.497292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.566 [2024-12-07 01:03:09.510115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.510555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.510584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.510600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.510845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.511080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.511103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.511118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.511131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.566 [2024-12-07 01:03:09.523468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.523877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.523904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.523920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.524168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.524391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.524410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.524423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.524434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.566 [2024-12-07 01:03:09.536804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.537142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.537172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.537189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.537430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.537635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.537654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.537667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.537678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.566 [2024-12-07 01:03:09.550325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.550657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.550686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.550703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.550921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.551164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.551185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.551199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.551212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.566 [2024-12-07 01:03:09.563590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.563937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.563966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.564006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.564253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.564467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.564492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.564506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.564519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.566 [2024-12-07 01:03:09.576860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.566 [2024-12-07 01:03:09.577314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.566 [2024-12-07 01:03:09.577344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.566 [2024-12-07 01:03:09.577361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.566 [2024-12-07 01:03:09.577606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.566 [2024-12-07 01:03:09.577861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.566 [2024-12-07 01:03:09.577884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.566 [2024-12-07 01:03:09.577898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.566 [2024-12-07 01:03:09.577912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.567 [2024-12-07 01:03:09.590129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.590638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.590693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.590708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.590948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.591189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.591211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.591225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.591238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.567 [2024-12-07 01:03:09.603347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.603693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.603722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.603739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.603978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.604204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.604227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.604240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.604253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.567 [2024-12-07 01:03:09.616470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.616788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.616859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.616875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.617118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.617329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.617348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.617361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.617373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.567 [2024-12-07 01:03:09.629583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.630047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.630077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.630093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.630339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.630529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.630549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.630562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.630574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.567 [2024-12-07 01:03:09.642904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.643344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.643375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.643392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.643631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.643821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.643841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.643855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.643868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.567 [2024-12-07 01:03:09.656127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.656506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.656554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.656571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.656790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.657024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.657061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.657076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.657089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.567 [2024-12-07 01:03:09.669181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.669529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.669559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.669576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.669814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.670049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.670071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.670085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.670099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.567 [2024-12-07 01:03:09.682345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.682715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.682745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.682762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.683012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.683231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.683253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.683266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.683279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.567 [2024-12-07 01:03:09.695494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.695842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.695871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.695886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.696143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.696388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.696409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.567 [2024-12-07 01:03:09.696422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.567 [2024-12-07 01:03:09.696434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.567 [2024-12-07 01:03:09.708711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.567 [2024-12-07 01:03:09.709143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.567 [2024-12-07 01:03:09.709174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.567 [2024-12-07 01:03:09.709192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.567 [2024-12-07 01:03:09.709443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.567 [2024-12-07 01:03:09.709650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.567 [2024-12-07 01:03:09.709671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.568 [2024-12-07 01:03:09.709684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.568 [2024-12-07 01:03:09.709696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.829 [2024-12-07 01:03:09.722058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.829 [2024-12-07 01:03:09.722441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.829 [2024-12-07 01:03:09.722469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.829 [2024-12-07 01:03:09.722485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.829 [2024-12-07 01:03:09.722703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.829 [2024-12-07 01:03:09.722909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.829 [2024-12-07 01:03:09.722930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.829 [2024-12-07 01:03:09.722942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.829 [2024-12-07 01:03:09.722955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.829 [2024-12-07 01:03:09.735208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.829 [2024-12-07 01:03:09.735633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.829 [2024-12-07 01:03:09.735663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.829 [2024-12-07 01:03:09.735680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.829 [2024-12-07 01:03:09.735917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.829 [2024-12-07 01:03:09.736156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.736178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.736198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.736212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.830 [2024-12-07 01:03:09.748388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.748812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.748842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.748858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.749111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.749329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.749350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.749378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.749391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.830 [2024-12-07 01:03:09.761517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.761860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.761889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.761905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.762173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.762406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.762427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.762439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.762452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.830 [2024-12-07 01:03:09.774662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.775070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.775098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.775113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.775345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.775558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 410190 Killed "${NVMF_APP[@]}" "$@" 00:35:53.830 [2024-12-07 01:03:09.775579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.775593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.775610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=411655 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 411655 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 411655 ']' 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
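At this point the test script has observed the old target process die ("line 35: 410190 Killed" above), so it goes back through tgt_init and nvmfappstart -m 0xE to launch a fresh nvmf_tgt (pid 411655) inside the cvl_0_0_ns_spdk namespace, and then waits for that process to start listening on the RPC socket /var/tmp/spdk.sock. Reduced to its essence, and ignoring the namespace handling and process-liveness checks the real helper also does, that "waitforlisten" step is a connect() poll on a UNIX-domain socket; the sketch below (with an arbitrary 30 s timeout) is only an illustration of the idea, not the SPDK helper itself:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Poll a UNIX-domain socket path until something accepts connections on it.
     * The path and the idea come from the log above; this is an illustrative
     * stand-in for the test framework's waitforlisten, not its real code. */
    static int wait_for_listen(const char *path, int max_seconds)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_seconds; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;      /* RPC listener is up */
            }
            close(fd);
            sleep(1);          /* not listening yet, try again */
        }
        return -1;             /* gave up waiting */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 30) != 0)
            fprintf(stderr, "target never started listening\n");
        return 0;
    }

While that wait is in progress, the still-running bdevperf instance keeps producing the same reconnect errors against 10.0.0.2:4420, which is why the error records below continue to interleave with the shell trace.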
00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.830 01:03:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:53.830 [2024-12-07 01:03:09.788079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.788516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.788543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.788559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.788800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.789058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.789080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.789094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.789108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.830 [2024-12-07 01:03:09.801480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.801834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.801877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.801893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.802147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.802386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.802406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.802419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.802436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.830 [2024-12-07 01:03:09.814915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.815436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.815466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.815483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.815726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.815937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.815957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.815970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.816009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.830 [2024-12-07 01:03:09.827277] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:53.830 [2024-12-07 01:03:09.827349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.830 [2024-12-07 01:03:09.828491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.828853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.828883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.828900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.829128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.829350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.829373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.830 [2024-12-07 01:03:09.829388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.830 [2024-12-07 01:03:09.829401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.830 [2024-12-07 01:03:09.842004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.830 [2024-12-07 01:03:09.842406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.830 [2024-12-07 01:03:09.842435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.830 [2024-12-07 01:03:09.842451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.830 [2024-12-07 01:03:09.842675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.830 [2024-12-07 01:03:09.842886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.830 [2024-12-07 01:03:09.842906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.842920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.842938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.855454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.855812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.855841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.855858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.856111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.856344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.856364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.856377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.856389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.831 [2024-12-07 01:03:09.868873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.869303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.869333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.869349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.869598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.869794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.869812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.869825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.869837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.882333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.882639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.882666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.882683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.882900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.883142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.883164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.883177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.883189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.831 [2024-12-07 01:03:09.895554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.895985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.896022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.896041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.896272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.896485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.896506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.896519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.896531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.902547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:53.831 [2024-12-07 01:03:09.908883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.909361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.909391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.909420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.909663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.909861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.909881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.909896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.909909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.831 [2024-12-07 01:03:09.922278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.922760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.922806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.922826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.923052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.923265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.923287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.923326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.923340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.935658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.936036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.936075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.936104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.936342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.936538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.936557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.936571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.936583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.948188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.831 [2024-12-07 01:03:09.948222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.831 [2024-12-07 01:03:09.948244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.831 [2024-12-07 01:03:09.948256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.831 [2024-12-07 01:03:09.948265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:53.831 [2024-12-07 01:03:09.948960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.949466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.949495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.949521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.949741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.949715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:53.831 [2024-12-07 01:03:09.949836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:53.831 [2024-12-07 01:03:09.949839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.831 [2024-12-07 01:03:09.949964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.950019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.950037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.950051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:53.831 [2024-12-07 01:03:09.962534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.831 [2024-12-07 01:03:09.963055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.831 [2024-12-07 01:03:09.963096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.831 [2024-12-07 01:03:09.963128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.831 [2024-12-07 01:03:09.963381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:53.831 [2024-12-07 01:03:09.963596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:53.831 [2024-12-07 01:03:09.963618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:53.831 [2024-12-07 01:03:09.963635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:53.831 [2024-12-07 01:03:09.963662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:53.832 [2024-12-07 01:03:09.976307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:53.832 [2024-12-07 01:03:09.976814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.832 [2024-12-07 01:03:09.976863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:53.832 [2024-12-07 01:03:09.976884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:53.832 [2024-12-07 01:03:09.977130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:09.977373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:09.977397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:09.977415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:09.977430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.093 [2024-12-07 01:03:09.989806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:09.990311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:09.990361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:09.990382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:09.990638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:09.990852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:09.990884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:09.990901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:09.990917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.093 [2024-12-07 01:03:10.003947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.004485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:10.004534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:10.004555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:10.004782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:10.005023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:10.005048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:10.005065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:10.005081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.093 [2024-12-07 01:03:10.017801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.018376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:10.018426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:10.018448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:10.018692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:10.018912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:10.018949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:10.018967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:10.018991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.093 [2024-12-07 01:03:10.031680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.032199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:10.032247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:10.032269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:10.032522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:10.032760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:10.032783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:10.032801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:10.032816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.093 [2024-12-07 01:03:10.045581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.045966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:10.046011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:10.046031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:10.046249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:10.046501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:10.046522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:10.046537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:10.046551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.093 [2024-12-07 01:03:10.059324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.059653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.093 [2024-12-07 01:03:10.059697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.093 [2024-12-07 01:03:10.059715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.093 [2024-12-07 01:03:10.059955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.093 [2024-12-07 01:03:10.060213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.093 [2024-12-07 01:03:10.060236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.093 [2024-12-07 01:03:10.060251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.093 [2024-12-07 01:03:10.060275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.093 [2024-12-07 01:03:10.073008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.093 [2024-12-07 01:03:10.073378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.073409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.073427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.073645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.073869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.073892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.073908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.073922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.094 [2024-12-07 01:03:10.086672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.087025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.087056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.087074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.087291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.087514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.087537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.087551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.087565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.094 [2024-12-07 01:03:10.100196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.100566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.100589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.100821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.101077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.101101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.101117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.101130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.094 [2024-12-07 01:03:10.108258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.094 [2024-12-07 01:03:10.113775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.114195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.114212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.094 [2024-12-07 01:03:10.114429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.094 [2024-12-07 01:03:10.114665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.114688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.114703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.114716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.094 [2024-12-07 01:03:10.127391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.127825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.127858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.127877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.128110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.128363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.128386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.128409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.128424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.094 [2024-12-07 01:03:10.141234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 3790.50 IOPS, 14.81 MiB/s [2024-12-07T00:03:10.245Z] [2024-12-07 01:03:10.143262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.143292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.143309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.143540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.143764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.143801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.143816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.143829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:54.094 [2024-12-07 01:03:10.154902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.155375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.155413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.155433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.155688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.094 [2024-12-07 01:03:10.155937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.094 [2024-12-07 01:03:10.155962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.094 [2024-12-07 01:03:10.155979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:54.094 [2024-12-07 01:03:10.156004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.094 Malloc0 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.094 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.094 [2024-12-07 01:03:10.168542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.094 [2024-12-07 01:03:10.168974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.094 [2024-12-07 01:03:10.169012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef980 with addr=10.0.0.2, port=4420 00:35:54.094 [2024-12-07 01:03:10.169038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef980 is same with the state(6) to be set 00:35:54.094 [2024-12-07 01:03:10.169257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef980 (9): Bad file descriptor 00:35:54.095 [2024-12-07 01:03:10.169489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:54.095 [2024-12-07 01:03:10.169512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:54.095 [2024-12-07 01:03:10.169525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:54.095 [2024-12-07 01:03:10.169539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:54.095 [2024-12-07 01:03:10.176010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.095 01:03:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 410397 00:35:54.095 [2024-12-07 01:03:10.182215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:54.095 [2024-12-07 01:03:10.206981] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:56.408 4378.29 IOPS, 17.10 MiB/s [2024-12-07T00:03:13.492Z] 4888.62 IOPS, 19.10 MiB/s [2024-12-07T00:03:14.426Z] 5293.44 IOPS, 20.68 MiB/s [2024-12-07T00:03:15.362Z] 5615.60 IOPS, 21.94 MiB/s [2024-12-07T00:03:16.295Z] 5884.00 IOPS, 22.98 MiB/s [2024-12-07T00:03:17.226Z] 6106.33 IOPS, 23.85 MiB/s [2024-12-07T00:03:18.597Z] 6289.69 IOPS, 24.57 MiB/s [2024-12-07T00:03:19.528Z] 6448.43 IOPS, 25.19 MiB/s 00:36:03.377 Latency(us) 00:36:03.377 [2024-12-07T00:03:19.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.377 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:03.377 Verification LBA range: start 0x0 length 0x4000 00:36:03.377 Nvme1n1 : 15.01 6583.02 25.71 9822.02 0.00 7779.32 637.16 22913.33 00:36:03.377 [2024-12-07T00:03:19.528Z] =================================================================================================================== 00:36:03.377 [2024-12-07T00:03:19.528Z] Total : 6583.02 25.71 9822.02 0.00 7779.32 637.16 22913.33 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.377 
01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.377 rmmod nvme_tcp 00:36:03.377 rmmod nvme_fabrics 00:36:03.377 rmmod nvme_keyring 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 411655 ']' 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 411655 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 411655 ']' 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 411655 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411655 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411655' 00:36:03.377 killing process with pid 411655 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 411655 00:36:03.377 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 411655 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.634 01:03:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.173 00:36:06.173 real 0m22.245s 00:36:06.173 user 0m59.506s 00:36:06.173 sys 0m4.191s 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:06.173 
************************************ 00:36:06.173 END TEST nvmf_bdevperf 00:36:06.173 ************************************ 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.173 ************************************ 00:36:06.173 START TEST nvmf_target_disconnect 00:36:06.173 ************************************ 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:06.173 * Looking for test storage... 00:36:06.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.173 --rc genhtml_branch_coverage=1 00:36:06.173 --rc genhtml_function_coverage=1 00:36:06.173 --rc genhtml_legend=1 00:36:06.173 --rc geninfo_all_blocks=1 00:36:06.173 --rc geninfo_unexecuted_blocks=1 00:36:06.173 00:36:06.173 ' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.173 --rc genhtml_branch_coverage=1 00:36:06.173 --rc genhtml_function_coverage=1 00:36:06.173 --rc genhtml_legend=1 00:36:06.173 --rc geninfo_all_blocks=1 00:36:06.173 --rc geninfo_unexecuted_blocks=1 00:36:06.173 00:36:06.173 ' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.173 --rc genhtml_branch_coverage=1 00:36:06.173 --rc genhtml_function_coverage=1 00:36:06.173 --rc genhtml_legend=1 00:36:06.173 --rc geninfo_all_blocks=1 00:36:06.173 --rc geninfo_unexecuted_blocks=1 00:36:06.173 00:36:06.173 ' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:06.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.173 --rc genhtml_branch_coverage=1 00:36:06.173 --rc genhtml_function_coverage=1 00:36:06.173 --rc genhtml_legend=1 00:36:06.173 --rc geninfo_all_blocks=1 00:36:06.173 --rc geninfo_unexecuted_blocks=1 00:36:06.173 00:36:06.173 ' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.173 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:06.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.174 01:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:08.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:08.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:08.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:08.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
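The device discovery traced above reduces to a sysfs lookup: for each PCI function that matches a supported device ID, the harness globs /sys/bus/pci/devices/<BDF>/net/ to find the kernel interface bound to it. A rough standalone sketch of that step (not part of the harness), assuming nullglob and using the first port reported in this log, 0000:0a:00.0, as the example BDF:

  #!/usr/bin/env bash
  shopt -s nullglob                                   # unbound device -> empty array
  pci=0000:0a:00.0                                    # first E810 function found above
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )  # same glob the traced common.sh uses
  if (( ${#pci_net_devs[@]} )); then
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  else
      echo "no net devices bound to $pci" >&2
  fi

With both E810 ports bound to the ice driver this yields cvl_0_0 and cvl_0_1, the two interfaces the rest of the run is built on.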
00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.079 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.080 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:36:08.339 00:36:08.339 --- 10.0.0.2 ping statistics --- 00:36:08.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.339 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
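The ping checks recorded here (the second ping's replies continue just below) close out the rig that common.sh assembled immediately before: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP port 4420. A condensed sketch of that plumbing with the same names and addresses recorded in the trace (the harness additionally flushes stale addresses first and tags the iptables rule with an SPDK_NVMF comment):

  #!/usr/bin/env bash
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator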
00:36:08.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:36:08.339 00:36:08.339 --- 10.0.0.1 ping statistics --- 00:36:08.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.339 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.339 ************************************ 00:36:08.339 START TEST nvmf_target_disconnect_tc1 00:36:08.339 ************************************ 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.339 01:03:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:08.339 [2024-12-07 01:03:24.372056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.339 [2024-12-07 01:03:24.372133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1964620 with addr=10.0.0.2, port=4420 00:36:08.339 [2024-12-07 01:03:24.372188] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:08.339 [2024-12-07 01:03:24.372213] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:08.339 [2024-12-07 01:03:24.372228] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:08.339 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:08.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:08.339 Initializing NVMe Controllers 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:08.339 00:36:08.339 real 0m0.099s 00:36:08.339 user 0m0.041s 00:36:08.339 sys 0m0.058s 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:08.339 ************************************ 00:36:08.339 END TEST nvmf_target_disconnect_tc1 00:36:08.339 ************************************ 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:08.339 ************************************ 00:36:08.339 START TEST nvmf_target_disconnect_tc2 00:36:08.339 ************************************ 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=414807 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 414807 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 414807 ']' 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.339 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.339 [2024-12-07 01:03:24.483720] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:36:08.339 [2024-12-07 01:03:24.483800] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.599 [2024-12-07 01:03:24.557899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:08.599 [2024-12-07 01:03:24.606581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.599 [2024-12-07 01:03:24.606635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
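The nvmfappstart/waitforlisten sequence traced here starts the target application inside that namespace and blocks until its RPC socket answers. A simplified sketch of the same step, reusing the flags and paths recorded above and using rpc_get_methods only as a readiness probe (the harness's waitforlisten does more bookkeeping):

  #!/usr/bin/env bash
  # Launch nvmf_tgt in the target namespace with the same flags as this run.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll the default RPC socket until the target responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"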
00:36:08.599 [2024-12-07 01:03:24.606649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.599 [2024-12-07 01:03:24.606660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.599 [2024-12-07 01:03:24.606669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.599 [2024-12-07 01:03:24.608296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:08.599 [2024-12-07 01:03:24.608341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:08.599 [2024-12-07 01:03:24.608397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:08.599 [2024-12-07 01:03:24.608400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:08.599 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.599 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:08.599 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:08.599 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:08.599 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.857 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 Malloc0 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 [2024-12-07 01:03:24.798641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 01:03:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 [2024-12-07 01:03:24.826885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=414842 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:08.858 01:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:10.766 01:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 414807 00:36:10.766 01:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error (sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.766 Read completed with error 
(sct=0, sc=8) 00:36:10.766 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 [2024-12-07 01:03:26.853190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed 
with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 [2024-12-07 01:03:26.853516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 
Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Write completed with error (sct=0, sc=8) 00:36:10.767 starting I/O failed 00:36:10.767 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 [2024-12-07 01:03:26.853874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O 
failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Read completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 Write completed with error (sct=0, sc=8) 00:36:10.768 starting I/O failed 00:36:10.768 [2024-12-07 01:03:26.854167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.768 [2024-12-07 01:03:26.854308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.854348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.854474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.854501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.854624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.854648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.854733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.854758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.854856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.854904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 
00:36:10.768 [2024-12-07 01:03:26.855440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.855846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.855872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.768 qpair failed and we were unable to recover it. 00:36:10.768 [2024-12-07 01:03:26.856001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.768 [2024-12-07 01:03:26.856049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 
00:36:10.769 [2024-12-07 01:03:26.856778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.856806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.856968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.857837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.857866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 
00:36:10.769 [2024-12-07 01:03:26.858278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.858885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.858913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 
00:36:10.769 [2024-12-07 01:03:26.859737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.859963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.859990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.860105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.860132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.769 qpair failed and we were unable to recover it. 00:36:10.769 [2024-12-07 01:03:26.860223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.769 [2024-12-07 01:03:26.860250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.860362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.860389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.860495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.860532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.860618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.860644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.860734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.860762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.860859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.860887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 
00:36:10.770 [2024-12-07 01:03:26.860976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.861918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.861956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 
00:36:10.770 [2024-12-07 01:03:26.862292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.862935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.862962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 
00:36:10.770 [2024-12-07 01:03:26.863657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.863908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.770 [2024-12-07 01:03:26.863936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.770 qpair failed and we were unable to recover it. 00:36:10.770 [2024-12-07 01:03:26.864079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.864886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.864913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 
00:36:10.771 [2024-12-07 01:03:26.865006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.865877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.865917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 
00:36:10.771 [2024-12-07 01:03:26.866523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.866888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.866919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.867723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 
00:36:10.771 [2024-12-07 01:03:26.867863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.867894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.771 [2024-12-07 01:03:26.868011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.771 [2024-12-07 01:03:26.868039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.771 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.868917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.868958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 
00:36:10.772 [2024-12-07 01:03:26.869249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.869861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.869965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.870107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.870251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.870390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.870634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 
00:36:10.772 [2024-12-07 01:03:26.870897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.870950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.871840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.871872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.872011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.872039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 00:36:10.772 [2024-12-07 01:03:26.872118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.772 [2024-12-07 01:03:26.872144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.772 qpair failed and we were unable to recover it. 
00:36:10.773 [2024-12-07 01:03:26.872268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.872405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.872542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.872659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.872816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.872957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.872986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 
00:36:10.773 [2024-12-07 01:03:26.873651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.873951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.873979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.874937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.874978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 
00:36:10.773 [2024-12-07 01:03:26.875111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.875139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.773 qpair failed and we were unable to recover it. 00:36:10.773 [2024-12-07 01:03:26.875265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.773 [2024-12-07 01:03:26.875305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.875453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.875481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.875618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.875664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.875753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.875781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.875864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.875891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.876021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.876062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.876185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.876215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.876372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.876425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.876648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.876705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 
00:36:10.774 [2024-12-07 01:03:26.876823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.876852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.876971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.877873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.877989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 
00:36:10.774 [2024-12-07 01:03:26.878265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.878907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.878933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.879017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.879044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.879130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.879157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.879245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.879271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 00:36:10.774 [2024-12-07 01:03:26.879385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.774 [2024-12-07 01:03:26.879411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:10.774 qpair failed and we were unable to recover it. 
00:36:10.774 [2024-12-07 01:03:26.879492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.775 [2024-12-07 01:03:26.879519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:10.775 qpair failed and we were unable to recover it.
00:36:10.775 [2024-12-07 01:03:26.879662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.775 [2024-12-07 01:03:26.879689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:10.775 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats continuously from 01:03:26.879 through 01:03:26.910, alternating between tqpair=0x1530730, 0x7f2388000b90, 0x7f238c000b90, and 0x7f2394000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:36:11.070 [2024-12-07 01:03:26.910280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.070 [2024-12-07 01:03:26.910307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.070 qpair failed and we were unable to recover it.
00:36:11.070 [2024-12-07 01:03:26.910403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.910443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.910534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.910563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.910696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.910735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.910880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.910907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 
00:36:11.071 [2024-12-07 01:03:26.911812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.911951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.911978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.912840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.912965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.913137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 
00:36:11.071 [2024-12-07 01:03:26.913280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.913406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.913600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.913774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.913917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 
00:36:11.071 [2024-12-07 01:03:26.914815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.914943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.914984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.915147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.915265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.915409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.915541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.071 [2024-12-07 01:03:26.915656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.071 qpair failed and we were unable to recover it. 00:36:11.071 [2024-12-07 01:03:26.915783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.915812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.915952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.915979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-07 01:03:26.916184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.916798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.916965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-07 01:03:26.917645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.917898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.917926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.918096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.918245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.918413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.918684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.918824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.918953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.919107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-07 01:03:26.919259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.919477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.919665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.919846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.919875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.920820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.920847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 
00:36:11.072 [2024-12-07 01:03:26.920986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.921020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.921159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.921186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.921307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.921340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.921545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.921571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.921679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.072 [2024-12-07 01:03:26.921706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.072 qpair failed and we were unable to recover it. 00:36:11.072 [2024-12-07 01:03:26.921822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.921849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.921960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.921987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.922112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.922248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.922389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-07 01:03:26.922562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.922736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.922876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.922904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.923885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.923912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-07 01:03:26.924004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.924931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.924971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-07 01:03:26.925426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.925843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.925975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 
00:36:11.073 [2024-12-07 01:03:26.926770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.926912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.926939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.927060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.073 [2024-12-07 01:03:26.927089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.073 qpair failed and we were unable to recover it. 00:36:11.073 [2024-12-07 01:03:26.927206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.927315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.927482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.927615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.927737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.927907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.927935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-07 01:03:26.928226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.928829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.928960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-07 01:03:26.929643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.929951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.929991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.930884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.930915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-07 01:03:26.931005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.931873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.931901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.932016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.932043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.074 [2024-12-07 01:03:26.932156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.932183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 
00:36:11.074 [2024-12-07 01:03:26.932340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.074 [2024-12-07 01:03:26.932380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.074 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.932533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.932574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.932693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.932721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.932814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.932840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.932920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.932948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 
00:36:11.075 [2024-12-07 01:03:26.933685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.933927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.933955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.934952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.934981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 
00:36:11.075 [2024-12-07 01:03:26.935085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.935885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.935911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.936045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.936157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.936305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 
00:36:11.075 [2024-12-07 01:03:26.936428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.936568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.075 [2024-12-07 01:03:26.936707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.075 [2024-12-07 01:03:26.936733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.075 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.936874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.936902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.936993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.937161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.937329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.937496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.937612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.937780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 
00:36:11.076 [2024-12-07 01:03:26.937926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.937954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.938969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.938993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.939141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.939260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 
00:36:11.076 [2024-12-07 01:03:26.939400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.939541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.939695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.939891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.939920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.940757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 
00:36:11.076 [2024-12-07 01:03:26.940914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.940955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.941877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.941908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.942040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.942081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.076 [2024-12-07 01:03:26.942173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.942199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 
00:36:11.076 [2024-12-07 01:03:26.942294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.076 [2024-12-07 01:03:26.942322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.076 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.942460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.942488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.942600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.942627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.942780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.942808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.942925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.942954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-07 01:03:26.943787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.943899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.943927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.944922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.944951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.945076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.945220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-07 01:03:26.945332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.945496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.945677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.945878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.945907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.946762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 
00:36:11.077 [2024-12-07 01:03:26.946910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.946937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.947902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.947929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.077 qpair failed and we were unable to recover it. 00:36:11.077 [2024-12-07 01:03:26.948016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.077 [2024-12-07 01:03:26.948042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.948167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-07 01:03:26.948279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.948441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.948610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.948729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.948842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.948870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-07 01:03:26.949830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.949966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.949992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.950876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.950917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-07 01:03:26.951193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.951861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.951971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.952091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.952205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.952412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.952642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 
00:36:11.078 [2024-12-07 01:03:26.952822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.952861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.078 qpair failed and we were unable to recover it. 00:36:11.078 [2024-12-07 01:03:26.953857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.078 [2024-12-07 01:03:26.953913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.953993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.079 [2024-12-07 01:03:26.954434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.954946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.954973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.079 [2024-12-07 01:03:26.955785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.955921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.955948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.956889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.956913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.079 [2024-12-07 01:03:26.957008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.957887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.957991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.958196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.958344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 
00:36:11.079 [2024-12-07 01:03:26.958516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.958660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.958785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.958922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.958951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.079 qpair failed and we were unable to recover it. 00:36:11.079 [2024-12-07 01:03:26.959075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.079 [2024-12-07 01:03:26.959101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.959189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.959298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.959435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.959676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.959839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 
00:36:11.080 [2024-12-07 01:03:26.959946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.959972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.960882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.960910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 
00:36:11.080 [2024-12-07 01:03:26.961443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.961850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.961973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 
00:36:11.080 [2024-12-07 01:03:26.962802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.962828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.962968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.080 qpair failed and we were unable to recover it. 00:36:11.080 [2024-12-07 01:03:26.963767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.080 [2024-12-07 01:03:26.963791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.963917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.963944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 
00:36:11.081 [2024-12-07 01:03:26.964236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.964891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.964920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 
00:36:11.081 [2024-12-07 01:03:26.965623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.965895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.965989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.966912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.966938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 
00:36:11.081 [2024-12-07 01:03:26.967057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.967946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.967973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 
00:36:11.081 [2024-12-07 01:03:26.968298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.081 [2024-12-07 01:03:26.968888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.081 qpair failed and we were unable to recover it. 00:36:11.081 [2024-12-07 01:03:26.968980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.969125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.969270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.969477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.969642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 
00:36:11.082 [2024-12-07 01:03:26.969813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.969961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.969988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.970115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.970143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.970256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.970284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.970446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.970498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.970646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.970708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.970849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.970876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 
00:36:11.082 [2024-12-07 01:03:26.971505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.971950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.971978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.972819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.972887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 
00:36:11.082 [2024-12-07 01:03:26.973010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.973937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.973964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 
00:36:11.082 [2024-12-07 01:03:26.974489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.082 [2024-12-07 01:03:26.974797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.082 qpair failed and we were unable to recover it. 00:36:11.082 [2024-12-07 01:03:26.974912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.974942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.975770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 
00:36:11.083 [2024-12-07 01:03:26.975938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.975978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.976900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.976929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.977046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.977184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.977350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 
00:36:11.083 [2024-12-07 01:03:26.977483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.977710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.977822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.977850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.978959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.978985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 
00:36:11.083 [2024-12-07 01:03:26.979072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.979861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.979888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.980040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.980068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.980154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.980181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 00:36:11.083 [2024-12-07 01:03:26.980270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.980297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.083 qpair failed and we were unable to recover it. 
00:36:11.083 [2024-12-07 01:03:26.980412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.083 [2024-12-07 01:03:26.980439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.980523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.980549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.980649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.980678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.980774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.980802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.980917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.980943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 
00:36:11.084 [2024-12-07 01:03:26.981709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.981879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.981908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.982856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.982883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 
00:36:11.084 [2024-12-07 01:03:26.983120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.983868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.983897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 
00:36:11.084 [2024-12-07 01:03:26.984391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.084 [2024-12-07 01:03:26.984831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.084 qpair failed and we were unable to recover it. 00:36:11.084 [2024-12-07 01:03:26.984914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.984939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 
00:36:11.085 [2024-12-07 01:03:26.985697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.985856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.985886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.986941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.986982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 
00:36:11.085 [2024-12-07 01:03:26.987090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.987233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.987405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.987629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.987772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.987916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.987957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 
00:36:11.085 [2024-12-07 01:03:26.988634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.988915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.988947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.989784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 
00:36:11.085 [2024-12-07 01:03:26.989928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.989954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.990076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.085 [2024-12-07 01:03:26.990102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.085 qpair failed and we were unable to recover it. 00:36:11.085 [2024-12-07 01:03:26.990240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.990348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.990487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.990657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.990810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.990927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.990955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 
00:36:11.086 [2024-12-07 01:03:26.991333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.991878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.991906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 
00:36:11.086 [2024-12-07 01:03:26.992704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.992961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.992987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.993922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.993948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 
00:36:11.086 [2024-12-07 01:03:26.994096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.994914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.994940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.995030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.995057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.995199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.995224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 00:36:11.086 [2024-12-07 01:03:26.995374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.086 [2024-12-07 01:03:26.995401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.086 qpair failed and we were unable to recover it. 
00:36:11.086 [2024-12-07 01:03:26.995541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.995594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.995735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.995760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.995850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.995877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.995960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.995986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.996123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.996150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.996274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.996303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.996502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.996573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.996787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.996838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.996980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.997136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 
00:36:11.087 [2024-12-07 01:03:26.997260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.997471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.997710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.997873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.997899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.998107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.998148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.998266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.998294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.998507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.998563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.998738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.998802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.999021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.999142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 
00:36:11.087 [2024-12-07 01:03:26.999315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.999539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.999770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:26.999954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:26.999983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.000102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.000248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.000383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.000618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.000836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.000977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 
00:36:11.087 [2024-12-07 01:03:27.001124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.001232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.001399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.001590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.001771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.001891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.001918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.002013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.087 [2024-12-07 01:03:27.002150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.087 [2024-12-07 01:03:27.002177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.087 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.002297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.002324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.002428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.002455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 
00:36:11.088 [2024-12-07 01:03:27.002605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.002632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.002744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.002770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.002879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.002906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 
00:36:11.088 [2024-12-07 01:03:27.003841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.003867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.003986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.004871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.004911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 
00:36:11.088 [2024-12-07 01:03:27.005185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.005936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.005964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 
00:36:11.088 [2024-12-07 01:03:27.006628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.006946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.006973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.007095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.007124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.007247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.007274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.088 qpair failed and we were unable to recover it. 00:36:11.088 [2024-12-07 01:03:27.007363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.088 [2024-12-07 01:03:27.007390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.007558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.007609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.007790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.007817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.007900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.007927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 
00:36:11.089 [2024-12-07 01:03:27.008209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.008819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.008973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 
00:36:11.089 [2024-12-07 01:03:27.009659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.009858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.009985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.010134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.010342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.010624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.010843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.010958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.010984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.011107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.011250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 
00:36:11.089 [2024-12-07 01:03:27.011351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.011553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.011745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.011906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.011947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.012088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.012129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.012252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.012280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.012356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.012383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.012597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.012655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.089 qpair failed and we were unable to recover it. 00:36:11.089 [2024-12-07 01:03:27.012876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.089 [2024-12-07 01:03:27.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.013016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 
00:36:11.090 [2024-12-07 01:03:27.013181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.013344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.013543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.013734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.013883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.013912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 
00:36:11.090 [2024-12-07 01:03:27.014723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.014938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.014967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.015898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.015926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.016020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.016047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 
00:36:11.090 [2024-12-07 01:03:27.016162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.016192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.016363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.016417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.016643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.016699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.016840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.016867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.016980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.017156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.017308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.017503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.017734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.017914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.017941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 
00:36:11.090 [2024-12-07 01:03:27.018051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.018212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.018398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.018615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.018746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.018881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.018907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.090 qpair failed and we were unable to recover it. 00:36:11.090 [2024-12-07 01:03:27.019029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.090 [2024-12-07 01:03:27.019070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.019216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.019244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.019385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.019413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.019595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.019654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 
00:36:11.091 [2024-12-07 01:03:27.019744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.019770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.019882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.019908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.020918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.020944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 
00:36:11.091 [2024-12-07 01:03:27.021149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.021908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.021935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 
00:36:11.091 [2024-12-07 01:03:27.022651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.022943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.022969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.023896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.023923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 
00:36:11.091 [2024-12-07 01:03:27.024017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.024045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.024160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.024192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.024279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.024306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.024392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.024420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.091 [2024-12-07 01:03:27.024567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.091 [2024-12-07 01:03:27.024594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.091 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.024687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.024714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.024794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.024821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.024930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.024957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.025051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.025186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 
00:36:11.092 [2024-12-07 01:03:27.025336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.025502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.025668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.025840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.025880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 
00:36:11.092 [2024-12-07 01:03:27.026806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.026846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.026974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.027877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.027992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.028138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.028310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 
00:36:11.092 [2024-12-07 01:03:27.028485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.028632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.028802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.028947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.028973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.029150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.029266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.029436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.029615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.029887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 00:36:11.092 [2024-12-07 01:03:27.029977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.092 [2024-12-07 01:03:27.030012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.092 qpair failed and we were unable to recover it. 
00:36:11.092 [2024-12-07 01:03:27.030109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.092 [2024-12-07 01:03:27.030137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.092 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 01:03:27.030109 through 01:03:27.060971 (console timestamps 00:36:11.092 through 00:36:11.098): posix.c:1054:posix_sock_create reports "connect() failed, errno = 111" and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f2394000b90, tqpair=0x7f238c000b90, or tqpair=0x1530730, always against addr=10.0.0.2, port=4420, with every attempt ending in "qpair failed and we were unable to recover it." ...]
00:36:11.098 [2024-12-07 01:03:27.061098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.061959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.061987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.062087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.062112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.062215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.062242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.062348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.062375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 
00:36:11.098 [2024-12-07 01:03:27.062459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.098 [2024-12-07 01:03:27.062484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.098 qpair failed and we were unable to recover it. 00:36:11.098 [2024-12-07 01:03:27.062579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.062606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.062684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.062713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.062802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.062829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.062942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.062969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.063102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.063219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.063396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.063615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.063783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 
00:36:11.099 [2024-12-07 01:03:27.063929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.063960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.064077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.064113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.064280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.064331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.064502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.064553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.064708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.064764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.064870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.064897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 
00:36:11.099 [2024-12-07 01:03:27.065623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.065848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.065879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.066909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.066935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.067025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.067061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 
00:36:11.099 [2024-12-07 01:03:27.067232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.067289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.067445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.067500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.067663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.067714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.067858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.067885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.068060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.068240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.068531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.068765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.099 [2024-12-07 01:03:27.068880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.099 qpair failed and we were unable to recover it. 00:36:11.099 [2024-12-07 01:03:27.068970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 
00:36:11.100 [2024-12-07 01:03:27.069121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.069901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.069928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 
00:36:11.100 [2024-12-07 01:03:27.070450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.070959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.070987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.071116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.071268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.071486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.071620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.071766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 
00:36:11.100 [2024-12-07 01:03:27.071889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.071916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.072044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.072072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.072268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.072321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.072462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.072508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.072696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.072756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.072867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.072894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 
00:36:11.100 [2024-12-07 01:03:27.073700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.073945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.073972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.074107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.074134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.074276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.074303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.074443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.074470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.074563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.100 [2024-12-07 01:03:27.074589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.100 qpair failed and we were unable to recover it. 00:36:11.100 [2024-12-07 01:03:27.074682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.074709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.074827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.074853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 
00:36:11.101 [2024-12-07 01:03:27.075126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.075858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.075885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 
00:36:11.101 [2024-12-07 01:03:27.076376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.076870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.076896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 
00:36:11.101 [2024-12-07 01:03:27.077673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.077878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.077974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.078870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.078896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.079016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.079047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 
00:36:11.101 [2024-12-07 01:03:27.079165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.079192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.079423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.079482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.079643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.079695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.079814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.079843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.079983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.101 [2024-12-07 01:03:27.080027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.101 qpair failed and we were unable to recover it. 00:36:11.101 [2024-12-07 01:03:27.080120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.080146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.080312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.080365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.080541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.080605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.080779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.080835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.080981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 
00:36:11.102 [2024-12-07 01:03:27.081149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.081303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.081544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.081775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.081961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.081987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.082128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.082229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.082381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.082533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.082684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 
00:36:11.102 [2024-12-07 01:03:27.082833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.082859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.083916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.083942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 
00:36:11.102 [2024-12-07 01:03:27.084209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.084894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.084921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 
00:36:11.102 [2024-12-07 01:03:27.085569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.085891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.102 [2024-12-07 01:03:27.085932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.102 qpair failed and we were unable to recover it. 00:36:11.102 [2024-12-07 01:03:27.086036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.086826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.086854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 
00:36:11.103 [2024-12-07 01:03:27.087020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.087875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.087990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 
00:36:11.103 [2024-12-07 01:03:27.088371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.088948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.088976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 
00:36:11.103 [2024-12-07 01:03:27.089676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.089854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.089968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.090003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.090150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.090204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.090363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.090413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.090643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.090698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.090812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.103 [2024-12-07 01:03:27.090839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.103 qpair failed and we were unable to recover it. 00:36:11.103 [2024-12-07 01:03:27.090925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.090951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 
00:36:11.104 [2024-12-07 01:03:27.091327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.091854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.091880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 
00:36:11.104 [2024-12-07 01:03:27.092754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.092921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.092948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.093176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.093239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.093328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.093355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.093518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.093569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.093752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.093809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.093950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.093979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.094071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.094218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.094379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 
00:36:11.104 [2024-12-07 01:03:27.094617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.094779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.094921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.094952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.095934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.095962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 
00:36:11.104 [2024-12-07 01:03:27.096093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.096122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.096264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.096291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.096431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.096458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.096582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.104 [2024-12-07 01:03:27.096669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.104 [2024-12-07 01:03:27.096697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.104 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.096812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.096839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.096980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 
00:36:11.105 [2024-12-07 01:03:27.097552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.097943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.097971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.098833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.098874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 
00:36:11.105 [2024-12-07 01:03:27.098993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.099119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.099300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.099413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.099534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.099784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.099848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 
00:36:11.105 [2024-12-07 01:03:27.100556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.100876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.100972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.101758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 
00:36:11.105 [2024-12-07 01:03:27.101898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.101925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.102065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.102092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.105 [2024-12-07 01:03:27.102204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.105 [2024-12-07 01:03:27.102231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.105 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.102929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.102955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 
00:36:11.106 [2024-12-07 01:03:27.103249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.103939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.103966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 
00:36:11.106 [2024-12-07 01:03:27.104592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.104838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.104864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.105894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.105921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 
00:36:11.106 [2024-12-07 01:03:27.106185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.106913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.106938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.107068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.107097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.107190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.107216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.106 [2024-12-07 01:03:27.107307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.107333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 
00:36:11.106 [2024-12-07 01:03:27.107452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.106 [2024-12-07 01:03:27.107478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.106 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.107554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.107579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.107659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.107684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.107770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.107798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.107874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.107899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 
00:36:11.107 [2024-12-07 01:03:27.108699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.108950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.108978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.109914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.109941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 
00:36:11.107 [2024-12-07 01:03:27.110030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.110890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.110917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.111024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.111142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 
00:36:11.107 [2024-12-07 01:03:27.111255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.111394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.111503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.107 [2024-12-07 01:03:27.111602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.107 [2024-12-07 01:03:27.111637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.107 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.111759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.111786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.111872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.111899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.111987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 
00:36:11.108 [2024-12-07 01:03:27.112466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.112875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.112984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 
00:36:11.108 [2024-12-07 01:03:27.113691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.113945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.113970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.114846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.114872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 
00:36:11.108 [2024-12-07 01:03:27.114985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.115909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.115935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.116068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.116095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.116185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.116214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 
00:36:11.108 [2024-12-07 01:03:27.116333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.116360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.108 [2024-12-07 01:03:27.116480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.108 [2024-12-07 01:03:27.116506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.108 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.116619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.116645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.116761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.116789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.116912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.116939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-12-07 01:03:27.117801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.117940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.117968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.118072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.118099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.118189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.118218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.118372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.118400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.118518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.118599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.118877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.118943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.119119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.119147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.119246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.119273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.119421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.119486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-12-07 01:03:27.119729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.119795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.120070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.120098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.120185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.120211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.120361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.120387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.120500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.120573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.120854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-12-07 01:03:27.121508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.121892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.121919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 00:36:11.109 [2024-12-07 01:03:27.122669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.109 [2024-12-07 01:03:27.122696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-12-07 01:03:27.122781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.122808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.122898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.122926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.123891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.123918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.124002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.124028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-12-07 01:03:27.124143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.124202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.124351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.124425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.124629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.124700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.124892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.124921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-12-07 01:03:27.125695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.125903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.125929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.126765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-12-07 01:03:27.126899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.126926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.110 [2024-12-07 01:03:27.127777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.110 [2024-12-07 01:03:27.127817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.110 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.127908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.127937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.128025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 
00:36:11.111 [2024-12-07 01:03:27.128165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.128287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.128467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.128759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.128930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.128957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 
00:36:11.111 [2024-12-07 01:03:27.129779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.129929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.129971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.130937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.130964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 
00:36:11.111 [2024-12-07 01:03:27.131185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.131873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.131902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 
00:36:11.111 [2024-12-07 01:03:27.132455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.132926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.132954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.133086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.133113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.111 qpair failed and we were unable to recover it. 00:36:11.111 [2024-12-07 01:03:27.133199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.111 [2024-12-07 01:03:27.133226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.133312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.133337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.133457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.133485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.133607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.133634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.133722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.133751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 
00:36:11.112 [2024-12-07 01:03:27.133868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.133895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.134871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.134897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 
00:36:11.112 [2024-12-07 01:03:27.135542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.135852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.135977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.136850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 
00:36:11.112 [2024-12-07 01:03:27.136961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.137946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.112 [2024-12-07 01:03:27.137976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.112 qpair failed and we were unable to recover it. 00:36:11.112 [2024-12-07 01:03:27.138080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 
00:36:11.113 [2024-12-07 01:03:27.138203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.138395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.138621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.138787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.138901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.138929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.139063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.139375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.139573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.139752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 
00:36:11.113 [2024-12-07 01:03:27.139895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.139923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.140854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.140954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 
00:36:11.113 [2024-12-07 01:03:27.141281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.141966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.141993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.142098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.142237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.142446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.142608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 
00:36:11.113 [2024-12-07 01:03:27.142790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.142942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.142969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.143930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.143969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.113 qpair failed and we were unable to recover it. 00:36:11.113 [2024-12-07 01:03:27.144082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.113 [2024-12-07 01:03:27.144110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 
00:36:11.114 [2024-12-07 01:03:27.144226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.144370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.144490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.144608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.144751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.144885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.144925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 
00:36:11.114 [2024-12-07 01:03:27.145556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.145928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.145956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 
00:36:11.114 [2024-12-07 01:03:27.146820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.146928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.146954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.147930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.147964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 
00:36:11.114 [2024-12-07 01:03:27.148241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.148921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.148950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.149098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.149138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.149293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.149354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.149615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.149672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.114 [2024-12-07 01:03:27.149811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.149838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 
00:36:11.114 [2024-12-07 01:03:27.149916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.114 [2024-12-07 01:03:27.149943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.114 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.150065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.150105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.150233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.150270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.150490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.150553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.150739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 
00:36:11.115 [2024-12-07 01:03:27.151820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.151951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.151981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.152877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 
00:36:11.115 [2024-12-07 01:03:27.153290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.153937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.153964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.154088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.154201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.154319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.154440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.154575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 
00:36:11.115 [2024-12-07 01:03:27.154806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.154859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.155884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.155912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.156001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.156027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.156123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.156151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 
00:36:11.115 [2024-12-07 01:03:27.156280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.156320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.156441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.115 [2024-12-07 01:03:27.156499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.115 qpair failed and we were unable to recover it. 00:36:11.115 [2024-12-07 01:03:27.156660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.156717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.156873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.156900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.157718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.157778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 
00:36:11.116 [2024-12-07 01:03:27.158007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.158182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.158331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.158446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.158597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.158792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 
00:36:11.116 [2024-12-07 01:03:27.159570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.159956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.159984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.160963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.160990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 
00:36:11.116 [2024-12-07 01:03:27.161089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.161117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.161236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.161267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.161393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.161420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.161612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.161669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.161873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.161929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.162114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.162141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.162247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.162274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.162389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.162416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.116 qpair failed and we were unable to recover it. 00:36:11.116 [2024-12-07 01:03:27.162499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.116 [2024-12-07 01:03:27.162526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.162622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.162649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 
00:36:11.117 [2024-12-07 01:03:27.162727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.162754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.162874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.162933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.163946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.163973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 
00:36:11.117 [2024-12-07 01:03:27.164402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.164869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.164894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.165006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.165164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.165277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.165417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.117 [2024-12-07 01:03:27.165525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 
00:36:11.117 [2024-12-07 01:03:27.165671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.117 [2024-12-07 01:03:27.165698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.117 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.165797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.165826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.165970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.166837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.166864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 
00:36:11.118 [2024-12-07 01:03:27.167000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.167880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.167907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 
00:36:11.118 [2024-12-07 01:03:27.168410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.168926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.168953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 
00:36:11.118 [2024-12-07 01:03:27.169766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.169815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.169963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.170139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.170288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.170430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.170630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.170877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.170926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.171089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.171116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.118 qpair failed and we were unable to recover it. 00:36:11.118 [2024-12-07 01:03:27.171238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.118 [2024-12-07 01:03:27.171268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.171389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 
00:36:11.119 [2024-12-07 01:03:27.171521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.171583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.171667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.171694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.171773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.171800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.171882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.171907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 
00:36:11.119 [2024-12-07 01:03:27.172844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.172873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.172969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.173917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.173944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 
00:36:11.119 [2024-12-07 01:03:27.174032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.174784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.174903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153e5f0 is same with the state(6) to be set 00:36:11.119 [2024-12-07 01:03:27.175026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.175150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 
00:36:11.119 [2024-12-07 01:03:27.175262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.175374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.175495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.175643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.175931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.175980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.176131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.176159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.176245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.119 [2024-12-07 01:03:27.176278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.119 qpair failed and we were unable to recover it. 00:36:11.119 [2024-12-07 01:03:27.176425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.176482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.176600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.176658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.176765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.176791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 
00:36:11.120 [2024-12-07 01:03:27.176918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.177945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 
00:36:11.120 [2024-12-07 01:03:27.178345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.178937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.178977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.179114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.179143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.179261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.179288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.179373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.179399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 00:36:11.120 [2024-12-07 01:03:27.179500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.120 [2024-12-07 01:03:27.179527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.120 qpair failed and we were unable to recover it. 
00:36:11.120 [2024-12-07 01:03:27.179642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.120 [2024-12-07 01:03:27.179672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.120 qpair failed and we were unable to recover it.
00:36:11.120 [2024-12-07 01:03:27.179785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.120 [2024-12-07 01:03:27.179843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:11.120 qpair failed and we were unable to recover it.
(the same three-line sequence -- posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error; "qpair failed and we were unable to recover it" -- repeats continuously from 01:03:27.179935 through 01:03:27.208843, elapsed time 00:36:11.120 to 00:36:11.429, cycling over tqpair handles 0x1530730, 0x7f2394000b90, 0x7f238c000b90, and 0x7f2388000b90, always targeting addr=10.0.0.2, port=4420)
00:36:11.429 [2024-12-07 01:03:27.209006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.209877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.209907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.210033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.210150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.210273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 
00:36:11.429 [2024-12-07 01:03:27.210427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.210700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.210884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.210912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 00:36:11.429 [2024-12-07 01:03:27.211791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.429 qpair failed and we were unable to recover it. 
00:36:11.429 [2024-12-07 01:03:27.211902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.429 [2024-12-07 01:03:27.211929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.212879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.212917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.213078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.213218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 
00:36:11.430 [2024-12-07 01:03:27.213342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.213447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.213589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.213780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.213821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.214897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.214937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 
00:36:11.430 [2024-12-07 01:03:27.215038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.215154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.215290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.215433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.215593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.215878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.215935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.216103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.216144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.216265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.216293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.216407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.216434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.216583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.216640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 
00:36:11.430 [2024-12-07 01:03:27.216782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.216836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.430 [2024-12-07 01:03:27.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.430 [2024-12-07 01:03:27.217010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.430 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.217877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.217904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 
00:36:11.431 [2024-12-07 01:03:27.218271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.218896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.218923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 
00:36:11.431 [2024-12-07 01:03:27.219694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.219943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.219970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.220861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.220887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 
00:36:11.431 [2024-12-07 01:03:27.220984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.221961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.221990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.222107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.222147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.222308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.222337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.431 qpair failed and we were unable to recover it. 00:36:11.431 [2024-12-07 01:03:27.222458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.431 [2024-12-07 01:03:27.222495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 
00:36:11.432 [2024-12-07 01:03:27.222660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.222696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.222883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.222919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.223888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.223971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 
00:36:11.432 [2024-12-07 01:03:27.224097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.224962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.224990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.225087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.225195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 
00:36:11.432 [2024-12-07 01:03:27.225375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.225562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.225811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.225959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.225987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.226095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.226136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.226243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.226289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.226443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.226467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.226652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.226710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.226883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.226922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.227045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 
00:36:11.432 [2024-12-07 01:03:27.227173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.227311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.227451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.227626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.227828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.227867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.228024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.228072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.228196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.228225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.432 qpair failed and we were unable to recover it. 00:36:11.432 [2024-12-07 01:03:27.228344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.432 [2024-12-07 01:03:27.228371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.228480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.228519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.228662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.228713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 
00:36:11.433 [2024-12-07 01:03:27.228856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.228894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.229925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.229961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 
00:36:11.433 [2024-12-07 01:03:27.230352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.230873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.230907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 
00:36:11.433 [2024-12-07 01:03:27.231790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.231924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.231951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.232833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.232887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.233034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 
00:36:11.433 [2024-12-07 01:03:27.233172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.233316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.233469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.233676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.233884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.433 [2024-12-07 01:03:27.233917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.433 qpair failed and we were unable to recover it. 00:36:11.433 [2024-12-07 01:03:27.234048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.234213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.234329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.234465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.234618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 
00:36:11.434 [2024-12-07 01:03:27.234835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.234895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.235877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.235991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.236107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 
00:36:11.434 [2024-12-07 01:03:27.236217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.236333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.236532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.236741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.236804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.236974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.237110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.237245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.237407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.237594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.237791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 
00:36:11.434 [2024-12-07 01:03:27.237953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.237983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.238901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.238942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.239047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.239076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.239178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.239218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 
00:36:11.434 [2024-12-07 01:03:27.239319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.239348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.434 qpair failed and we were unable to recover it. 00:36:11.434 [2024-12-07 01:03:27.239465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.434 [2024-12-07 01:03:27.239493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.239580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.239608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.239745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.239786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.239942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.239968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.240108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.240237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.240404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.240514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.240663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 
00:36:11.435 [2024-12-07 01:03:27.240873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.240908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.241900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.241927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.242014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.242158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 
00:36:11.435 [2024-12-07 01:03:27.242316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.242479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.242649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.242816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.242852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.243752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 
00:36:11.435 [2024-12-07 01:03:27.243900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.243935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.244121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.244272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.244467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.244661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.244849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.435 [2024-12-07 01:03:27.244961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.435 [2024-12-07 01:03:27.245008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.435 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 
00:36:11.436 [2024-12-07 01:03:27.245502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.245827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.245980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 
00:36:11.436 [2024-12-07 01:03:27.246876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.246903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.246990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.247842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.247882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 
00:36:11.436 [2024-12-07 01:03:27.248240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.248955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.248981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 
00:36:11.436 [2024-12-07 01:03:27.249546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.436 [2024-12-07 01:03:27.249877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.436 qpair failed and we were unable to recover it. 00:36:11.436 [2024-12-07 01:03:27.249977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.250933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.250961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 
00:36:11.437 [2024-12-07 01:03:27.251059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.251182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.251297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.251479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.251676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.251846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.251887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 
00:36:11.437 [2024-12-07 01:03:27.252707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.252947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.252973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.253947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.253976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 
00:36:11.437 [2024-12-07 01:03:27.254070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.254914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.254941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.255071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.255100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.255211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.255239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 
00:36:11.437 [2024-12-07 01:03:27.255351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.255378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.437 qpair failed and we were unable to recover it. 00:36:11.437 [2024-12-07 01:03:27.255466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.437 [2024-12-07 01:03:27.255493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.255585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.255613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.255705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.255732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.255806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.255833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.255946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.255978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.256111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.256248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.256394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.256571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 
00:36:11.438 [2024-12-07 01:03:27.256727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.256946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.256975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.257949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.257974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 
00:36:11.438 [2024-12-07 01:03:27.258065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.258857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.258983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 
00:36:11.438 [2024-12-07 01:03:27.259363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.259908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.259935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.260052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.260175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.260202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.260342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.260368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.260460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.260489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.438 qpair failed and we were unable to recover it. 00:36:11.438 [2024-12-07 01:03:27.260578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.438 [2024-12-07 01:03:27.260607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-07 01:03:27.260691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.260717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.260804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.260831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.260926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.260954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.261759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-07 01:03:27.261894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.261921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.262954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.262982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-07 01:03:27.263464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.263930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.263966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-07 01:03:27.264808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.264947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.264974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-07 01:03:27.265075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.439 [2024-12-07 01:03:27.265115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.265228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.265268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.265385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.265414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.265525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.265564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.265746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.265794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.265925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.265954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-07 01:03:27.266292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.266934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.266963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.267059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.267177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.267315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.267463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.267615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-07 01:03:27.267826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.267881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.268960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.268988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-07 01:03:27.269208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.269936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.269962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.270077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.270119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.270221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.270261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-07 01:03:27.270349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.270378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-07 01:03:27.270578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.440 [2024-12-07 01:03:27.270606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.270686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.270714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.270795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.270822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.270915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.270947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 
00:36:11.441 [2024-12-07 01:03:27.271897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.271923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.272963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.272989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 
00:36:11.441 [2024-12-07 01:03:27.273079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.273948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.273976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 
00:36:11.441 [2024-12-07 01:03:27.274357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.274901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.274929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.275014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.275042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.275132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.275159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.275242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.275270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-07 01:03:27.275361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.441 [2024-12-07 01:03:27.275389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.275476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.275503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 
00:36:11.442 [2024-12-07 01:03:27.275611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.275641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.275736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.275761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.275874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.275901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.275990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 00:36:11.442 [2024-12-07 01:03:27.276741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.442 [2024-12-07 01:03:27.276782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.442 qpair failed and we were unable to recover it. 
00:36:11.442 [2024-12-07 01:03:27.276901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.442 [2024-12-07 01:03:27.276929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.442 qpair failed and we were unable to recover it.
00:36:11.442 [2024-12-07 01:03:27.277019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.442 [2024-12-07 01:03:27.277046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.442 qpair failed and we were unable to recover it.
...
00:36:11.442 [2024-12-07 01:03:27.277754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.442 [2024-12-07 01:03:27.277782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:11.442 qpair failed and we were unable to recover it.
00:36:11.442 [2024-12-07 01:03:27.277903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.442 [2024-12-07 01:03:27.277933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.442 qpair failed and we were unable to recover it.
...
00:36:11.442 [2024-12-07 01:03:27.279852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.442 [2024-12-07 01:03:27.279892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:11.442 qpair failed and we were unable to recover it.
...
00:36:11.447 [2024-12-07 01:03:27.309051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.447 [2024-12-07 01:03:27.309080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:11.447 qpair failed and we were unable to recover it.
00:36:11.447 [2024-12-07 01:03:27.309162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.447 [2024-12-07 01:03:27.309190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.447 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.309306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.309335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.309468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.309519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.309608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.309635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.309800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.309851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 
00:36:11.448 [2024-12-07 01:03:27.310758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.310905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.310932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.311924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.311953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 
00:36:11.448 [2024-12-07 01:03:27.312225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.312855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.312955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.313141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.313260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.313378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.313585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 
00:36:11.448 [2024-12-07 01:03:27.313881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.313933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.314156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.314184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.314271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.314299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.314544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.448 [2024-12-07 01:03:27.314594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.448 qpair failed and we were unable to recover it. 00:36:11.448 [2024-12-07 01:03:27.314782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.314850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 
00:36:11.449 [2024-12-07 01:03:27.315827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.315854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.315973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.316938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.316966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.317117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.317256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 
00:36:11.449 [2024-12-07 01:03:27.317410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.317617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.317801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.317942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.317969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.318776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 
00:36:11.449 [2024-12-07 01:03:27.318883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.318911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.319919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.319960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.320117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.320156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.320280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.320309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 
00:36:11.449 [2024-12-07 01:03:27.320428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.320455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.320586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.449 [2024-12-07 01:03:27.320612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.449 qpair failed and we were unable to recover it. 00:36:11.449 [2024-12-07 01:03:27.320753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.320780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.320892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.320919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.321791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.321843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 
00:36:11.450 [2024-12-07 01:03:27.322111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.322138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.322279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.322305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.322404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.322451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.322621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.322687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.322887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.322941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 
00:36:11.450 [2024-12-07 01:03:27.323664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.323866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.323895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.324935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.324974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 
00:36:11.450 [2024-12-07 01:03:27.325266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.325867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.325990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.326032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.326119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.326147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.326293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.326459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.326489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 00:36:11.450 [2024-12-07 01:03:27.326618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.450 [2024-12-07 01:03:27.326664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.450 qpair failed and we were unable to recover it. 
00:36:11.450 [2024-12-07 01:03:27.326844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.326898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.327013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.327041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.327146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.327173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.327290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.327317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.327511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.327577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.327811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.327860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.328015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.328061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.328214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.328254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.328421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.328484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.328640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.328694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 
00:36:11.451 [2024-12-07 01:03:27.328854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.328907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.329908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.329947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 
00:36:11.451 [2024-12-07 01:03:27.330296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.330834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.330874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 
00:36:11.451 [2024-12-07 01:03:27.331785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.331854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.331980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.332129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.332278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.332422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.332724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.332930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.332957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.451 [2024-12-07 01:03:27.333084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.451 [2024-12-07 01:03:27.333111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.451 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.333198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.333225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.333342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.333370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 
00:36:11.452 [2024-12-07 01:03:27.333481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.333507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.333671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.333701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.333883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.333913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.334874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.334904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 
00:36:11.452 [2024-12-07 01:03:27.335209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.335885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.335988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.336172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.336313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.336476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.336662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 
00:36:11.452 [2024-12-07 01:03:27.336782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.336826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.336960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.337929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.337955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 
00:36:11.452 [2024-12-07 01:03:27.338339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.338958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.338986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.452 [2024-12-07 01:03:27.339088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.452 [2024-12-07 01:03:27.339116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.452 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.339256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.339283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.339397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.339425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.339536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.339563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.339680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.339707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 
00:36:11.453 [2024-12-07 01:03:27.339868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.339898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.340878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.340908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.341012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.341160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 
00:36:11.453 [2024-12-07 01:03:27.341303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.341468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.341669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.341890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.341936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.342067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.342108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.342219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.342248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.342362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.342416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.342558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.342620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.342795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.342854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.343038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 
00:36:11.453 [2024-12-07 01:03:27.343203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.343347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.343544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.343707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.343873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.343900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.344017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.344044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.344124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.344151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.344289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.344322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.344496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.344550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 00:36:11.453 [2024-12-07 01:03:27.344733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.453 [2024-12-07 01:03:27.344783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.453 qpair failed and we were unable to recover it. 
00:36:11.454 [2024-12-07 01:03:27.344923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.344950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.345832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.345873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.346030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.346071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.346193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.346222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.346425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.346491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 
00:36:11.454 [2024-12-07 01:03:27.346664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.346727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.346894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.346926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.347087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.347127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.347244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.347274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.347388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.347416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.347528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.347585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.347915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 
00:36:11.454 [2024-12-07 01:03:27.348463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.348930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.348959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.349889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.349920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 
00:36:11.454 [2024-12-07 01:03:27.350076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.350856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.350986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.454 [2024-12-07 01:03:27.351021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.454 qpair failed and we were unable to recover it. 00:36:11.454 [2024-12-07 01:03:27.351135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.351277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.351388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 
00:36:11.455 [2024-12-07 01:03:27.351554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.351685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.351934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.352898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.352956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 
00:36:11.455 [2024-12-07 01:03:27.353173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.353909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.353935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 
00:36:11.455 [2024-12-07 01:03:27.354503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.354895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.354923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 
00:36:11.455 [2024-12-07 01:03:27.355766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.355880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.355907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.356027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.356056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.356147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.356174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.356266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.455 [2024-12-07 01:03:27.356295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.455 qpair failed and we were unable to recover it. 00:36:11.455 [2024-12-07 01:03:27.356389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.356421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.356507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.356534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.356618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.356646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.356722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.356749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.356875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.356916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 
00:36:11.456 [2024-12-07 01:03:27.357012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.357130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.357238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.357384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.357539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.357863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.357941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 
00:36:11.456 [2024-12-07 01:03:27.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.358919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.358957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 
00:36:11.456 [2024-12-07 01:03:27.359852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.359879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.359992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.360144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.360290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.360611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.360781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.360921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.360950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 
00:36:11.456 [2024-12-07 01:03:27.361563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.456 [2024-12-07 01:03:27.361840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.456 qpair failed and we were unable to recover it. 00:36:11.456 [2024-12-07 01:03:27.361931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.361959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.362807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 
00:36:11.457 [2024-12-07 01:03:27.362935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.362962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.363908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.363940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 
00:36:11.457 [2024-12-07 01:03:27.364513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.364896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.364923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 
00:36:11.457 [2024-12-07 01:03:27.365767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.365910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.365986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.366133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.366270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.366465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.366635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.457 qpair failed and we were unable to recover it. 00:36:11.457 [2024-12-07 01:03:27.366788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.457 [2024-12-07 01:03:27.366816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.366957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.366985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 
00:36:11.458 [2024-12-07 01:03:27.367184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.367866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.367978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 
00:36:11.458 [2024-12-07 01:03:27.368482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.368870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.368982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 
00:36:11.458 [2024-12-07 01:03:27.369793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.369949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.369976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.370899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.370927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 
00:36:11.458 [2024-12-07 01:03:27.371010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.371040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.371138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.371179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.371311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.371424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.458 [2024-12-07 01:03:27.371451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.458 qpair failed and we were unable to recover it. 00:36:11.458 [2024-12-07 01:03:27.371539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.371566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.371652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.371680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.371792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.371919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.371945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 
00:36:11.459 [2024-12-07 01:03:27.372304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.372896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.372975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.373091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.373214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.373487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.373685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 
00:36:11.459 [2024-12-07 01:03:27.373858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.373886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.373982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.374907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.374986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 
00:36:11.459 [2024-12-07 01:03:27.375346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.375868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.375916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 
00:36:11.459 [2024-12-07 01:03:27.376668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.376893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.459 [2024-12-07 01:03:27.376920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.459 qpair failed and we were unable to recover it. 00:36:11.459 [2024-12-07 01:03:27.377004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.377812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 
00:36:11.460 [2024-12-07 01:03:27.377917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.377942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.378954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.378981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 
00:36:11.460 [2024-12-07 01:03:27.379215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.379931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.379957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 
00:36:11.460 [2024-12-07 01:03:27.380606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.380889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.380918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 00:36:11.460 [2024-12-07 01:03:27.381719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.460 [2024-12-07 01:03:27.381759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.460 qpair failed and we were unable to recover it. 
00:36:11.466 [2024-12-07 01:03:27.413934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.414881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.414922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 
00:36:11.466 [2024-12-07 01:03:27.415353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.415943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.415973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.416109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.416419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.416582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.416779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 
00:36:11.466 [2024-12-07 01:03:27.416934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.416961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.417070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.417111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.417233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.417263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.466 qpair failed and we were unable to recover it. 00:36:11.466 [2024-12-07 01:03:27.417357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.466 [2024-12-07 01:03:27.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.417503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.417558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.417751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.417808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.417946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.417973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.418096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.418252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.418384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 
00:36:11.467 [2024-12-07 01:03:27.418523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.418718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.418866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.418896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.419831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.419861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.420036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 
00:36:11.467 [2024-12-07 01:03:27.420172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.420353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.420492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.420683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.420921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.420948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.421084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.421126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.421256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.421286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.421412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.421443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.421607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.421661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.421829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.421889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 
00:36:11.467 [2024-12-07 01:03:27.422032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.422251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.422443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.422590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.422772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.422928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.422955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.423060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.467 [2024-12-07 01:03:27.423088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.467 qpair failed and we were unable to recover it. 00:36:11.467 [2024-12-07 01:03:27.423227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.423254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.423505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.423557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.423830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.423897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 
00:36:11.468 [2024-12-07 01:03:27.424093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.424259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.424366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.424541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.424783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.424953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.424980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 
00:36:11.468 [2024-12-07 01:03:27.425600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.425964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.425991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.426756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 
00:36:11.468 [2024-12-07 01:03:27.426927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.426955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.427093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.427122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.427274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.427314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.427421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.427461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.427672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.427753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.427966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.428142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.428253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.428392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.428560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 
00:36:11.468 [2024-12-07 01:03:27.428727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.428903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.428969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.429157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.429184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.429327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.429354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.429551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.468 [2024-12-07 01:03:27.429578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.468 qpair failed and we were unable to recover it. 00:36:11.468 [2024-12-07 01:03:27.429727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.429754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.429891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.429927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 
00:36:11.469 [2024-12-07 01:03:27.430461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.430871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.431020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.431050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.431163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.431190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.431419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.431473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.431685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.431737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.431887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.431914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.432035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 
00:36:11.469 [2024-12-07 01:03:27.432198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.432344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.432475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.432614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.432863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.432928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.433108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.433135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.433223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.433247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.433392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.433420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.433543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.433607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.433890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.433956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 
00:36:11.469 [2024-12-07 01:03:27.434126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.434155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.434305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.434336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.434450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.434477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.434623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.434650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.434879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.434943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.435156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.435184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.435290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.435316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.435431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.435469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.435552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.435578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.435718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.435744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 
00:36:11.469 [2024-12-07 01:03:27.435972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.436060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.436171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.436198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.436313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.436339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.436454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.436480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.436567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.469 [2024-12-07 01:03:27.436591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.469 qpair failed and we were unable to recover it. 00:36:11.469 [2024-12-07 01:03:27.436850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.436915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.437107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.437260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.437422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.437566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 
00:36:11.470 [2024-12-07 01:03:27.437709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.437868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.437898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.438816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.438966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.439124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 
00:36:11.470 [2024-12-07 01:03:27.439262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.439380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.439543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.439692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.439846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.439874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.440059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.440227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.440374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.440495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.440691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 
00:36:11.470 [2024-12-07 01:03:27.440930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.440957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.441134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.441255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.441397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.441538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.441677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.441937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.442158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.442299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.442437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 
00:36:11.470 [2024-12-07 01:03:27.442599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.442808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.442874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.443082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.443110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.443248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.443294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.443478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.443560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.470 [2024-12-07 01:03:27.443900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.470 [2024-12-07 01:03:27.443967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.470 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.444153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.444180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.444292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.444318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.444460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.444486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.444599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.444666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 
00:36:11.471 [2024-12-07 01:03:27.444962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.445162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.445307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.445461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.445693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.445938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.445967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.446132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.446159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.446274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.446301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.446422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.446449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.446538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.446566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 
00:36:11.471 [2024-12-07 01:03:27.446714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.446779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.447833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.447897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.448107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.448248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.448387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 
00:36:11.471 [2024-12-07 01:03:27.448556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.448668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.448890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.448920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.449061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.449107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.449237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.449270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.449476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.449537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.449715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.449774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.449924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.449956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.450128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.450187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.450335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.450365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 
00:36:11.471 [2024-12-07 01:03:27.450564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.450624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.450842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.471 [2024-12-07 01:03:27.450898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.471 qpair failed and we were unable to recover it. 00:36:11.471 [2024-12-07 01:03:27.451043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.451072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.451212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.451239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.451380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.451406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.451567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.451633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.451915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.451967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.452142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.452169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.452293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.452330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.452471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 
00:36:11.472 [2024-12-07 01:03:27.452615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.452640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.452845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.452909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.453130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.453158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.453301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.453333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.453441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.453466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.453582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.453615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.453771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.453824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.454053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.454080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.454225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.454255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.454378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.454435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 
00:36:11.472 [2024-12-07 01:03:27.454604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.454676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.454896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.454949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.455098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.455265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.455492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.455673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.455824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.455974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.472 [2024-12-07 01:03:27.456015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.472 qpair failed and we were unable to recover it. 00:36:11.472 [2024-12-07 01:03:27.456173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.456201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.456317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.456344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 
00:36:11.473 [2024-12-07 01:03:27.456440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.456467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.456602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.456634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.456794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.456824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.456917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.457179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.457287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.457429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.457569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.457804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.457870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.458077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.458104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 
00:36:11.473 [2024-12-07 01:03:27.458216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.458243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.458376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.458418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.458599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.458669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.458881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.458935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.459931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.459961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 
00:36:11.473 [2024-12-07 01:03:27.460129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.460158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.460246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.460271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.460356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.460381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.460521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.460546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.460790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.460856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.461069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.461096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.461209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.461236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.461417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.461487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.461780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.461864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.462093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 
00:36:11.473 [2024-12-07 01:03:27.462237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.462343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.462509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.462684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.462923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.462953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.463073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.463103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.463217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.463244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.463364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.463390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.473 qpair failed and we were unable to recover it. 00:36:11.473 [2024-12-07 01:03:27.463533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.473 [2024-12-07 01:03:27.463559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.463790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.463855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 
00:36:11.474 [2024-12-07 01:03:27.464066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.464094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.464206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.464233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.464429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.464494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.464820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.464885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.465834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.465899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 
00:36:11.474 [2024-12-07 01:03:27.466109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.466137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.466250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.466275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.466387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.466415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.466526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.466553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.466710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.466775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.467072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.467100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.467214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.467241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.467387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.467413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.467625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.467670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.467982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 
00:36:11.474 [2024-12-07 01:03:27.468163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.468324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.468464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.468570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.468785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.468861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.469020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.469078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.469199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.469228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.469457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.469511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.469664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.469751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.469880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.469912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 
00:36:11.474 [2024-12-07 01:03:27.470016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.470926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.470956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.471114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.471154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.471247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.474 [2024-12-07 01:03:27.471275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.474 qpair failed and we were unable to recover it. 00:36:11.474 [2024-12-07 01:03:27.471441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.471502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 
00:36:11.475 [2024-12-07 01:03:27.471668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.471723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.471835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.471861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.472051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.472226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.472437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.472734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.472868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.472988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.473149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.473255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 
00:36:11.475 [2024-12-07 01:03:27.473402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.473648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.473891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.473923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.474091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.474263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.474451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.474659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.474880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.474977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.475123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 
00:36:11.475 [2024-12-07 01:03:27.475281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.475435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.475575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.475696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.475876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.475916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 
00:36:11.475 [2024-12-07 01:03:27.476787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.476842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.476988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.477051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.477230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.477274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.477502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.477552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.477785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.477839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.477970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.478003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.478110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.478138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.478229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.478256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.478428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.478495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 00:36:11.475 [2024-12-07 01:03:27.478588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.475 [2024-12-07 01:03:27.478618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.475 qpair failed and we were unable to recover it. 
00:36:11.475 [2024-12-07 01:03:27.478748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.478777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.478896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.478926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.479835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.479984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.480024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.480183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.480210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 
00:36:11.476 [2024-12-07 01:03:27.480337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.480378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.480575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.480624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.480792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.480859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.481935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.481960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.482088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.482122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 
00:36:11.476 [2024-12-07 01:03:27.482253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.482282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.482429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.482497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.482756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.482837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.483038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.483081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.483201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.483228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.483463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.483525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.483775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.483835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.484065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.484186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.484332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 
00:36:11.476 [2024-12-07 01:03:27.484467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.484667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.484880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.484910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.485012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.485057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.476 [2024-12-07 01:03:27.485146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.476 [2024-12-07 01:03:27.485174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.476 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.485301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.485330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.485418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.485446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.485597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.485658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.485930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.485991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.486158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 
00:36:11.477 [2024-12-07 01:03:27.486295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.486433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.486609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.486787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.486933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.486960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.487091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.487231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.487414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.487639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.487821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 
00:36:11.477 [2024-12-07 01:03:27.487959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.487985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.488929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.488970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.489128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.489249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.489396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 
00:36:11.477 [2024-12-07 01:03:27.489513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.489643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.489866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.489911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.490108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.490256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.490412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.490593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.490820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.490976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.491140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 
00:36:11.477 [2024-12-07 01:03:27.491285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.491462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.491683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.491859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.491885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.492014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.492042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.477 [2024-12-07 01:03:27.492160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.477 [2024-12-07 01:03:27.492187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.477 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.492261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.492287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.492398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.492425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.492546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.492572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.492660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.492690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 
00:36:11.478 [2024-12-07 01:03:27.492816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.492856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.492987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.493896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.493923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 
00:36:11.478 [2024-12-07 01:03:27.494388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.494854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.494982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.495036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.495152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.495181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.495300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.495344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.495573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.495624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.495839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.495891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 
00:36:11.478 [2024-12-07 01:03:27.496178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.496932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.496958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.497099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.497145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.497266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.497296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.497428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.497454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 00:36:11.478 [2024-12-07 01:03:27.497571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.497602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it. 
00:36:11.478 [2024-12-07 01:03:27.497743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.497770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it.
00:36:11.478 [2024-12-07 01:03:27.498195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.478 [2024-12-07 01:03:27.498235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.478 qpair failed and we were unable to recover it.
00:36:11.479 [2024-12-07 01:03:27.499140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.479 [2024-12-07 01:03:27.499179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.479 qpair failed and we were unable to recover it.
00:36:11.479 [2024-12-07 01:03:27.501581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.479 [2024-12-07 01:03:27.501625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.479 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair, each ending in "qpair failed and we were unable to recover it.", repeats through [2024-12-07 01:03:27.534066] (console time 00:36:11.478 to 00:36:11.783) for tqpair values 0x1530730, 0x7f2388000b90, 0x7f238c000b90, and 0x7f2394000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:36:11.783 [2024-12-07 01:03:27.534159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.534186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.534275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.534301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.534384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.534409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.534566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.534624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.534908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.534973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.535130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.535157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.535293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.535322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.535454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.535521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.535715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.535783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.535985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.536023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 
00:36:11.783 [2024-12-07 01:03:27.536134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.536161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.536277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.536303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.536463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.536495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.536749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.536813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.537910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.537977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 
00:36:11.783 [2024-12-07 01:03:27.538158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.538184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.538342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.538371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.538474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.538526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.538747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.538813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.538963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.538992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.539135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.539161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.539249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.539276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.539439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.539469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.539578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.539620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.539745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.539774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 
00:36:11.783 [2024-12-07 01:03:27.540054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.540101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.540227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.540253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.540345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.540372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.783 [2024-12-07 01:03:27.540590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.783 [2024-12-07 01:03:27.540619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.783 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.540742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.540807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.541057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.541084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.541167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.541195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.541279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.541305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.541409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.541435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.541633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.541697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 
00:36:11.784 [2024-12-07 01:03:27.541980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.542148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.542255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.542411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.542556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.542694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.542727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.543016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.543072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.543180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.543206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.543351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.543385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.543664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.543727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 
00:36:11.784 [2024-12-07 01:03:27.543967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.544186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.544214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.544431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.544464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.544592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.544625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.544866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.544917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.545138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.545174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.545323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.545357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.545620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.545685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.545873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.545937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.546175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.546211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 
00:36:11.784 [2024-12-07 01:03:27.546372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.546436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.546682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.546746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.547006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.547051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.547160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.547192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.547301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.547334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.547548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.547615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.547856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.547921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.548158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.548192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.548335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.548406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.548602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.548666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 
00:36:11.784 [2024-12-07 01:03:27.548949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.784 [2024-12-07 01:03:27.549046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.784 qpair failed and we were unable to recover it. 00:36:11.784 [2024-12-07 01:03:27.549194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.549228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.549424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.549488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.549740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.549805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.550078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.550113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.550261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.550319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.550579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.550612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.550745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.550778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.550921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.550953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.551101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.551134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 
00:36:11.785 [2024-12-07 01:03:27.551241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.551280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.551547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.551614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.551824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.551891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.552094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.552129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.552320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.552353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.552490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.552522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.552714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.552776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.552988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.553074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.553216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.553249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.553389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.553423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 
00:36:11.785 [2024-12-07 01:03:27.553624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.553689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.553984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.554028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.554161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.554194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.554410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.554475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.554723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.554790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.555087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.555122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.555237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.555285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.555523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.555587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.555831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.555895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.556148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.556184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 
00:36:11.785 [2024-12-07 01:03:27.556323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.556403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.556670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.556735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.557011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.557078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.557377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.557449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.557710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.557741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.557904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.557963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.558280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.558346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.558636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.558702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.558966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.785 [2024-12-07 01:03:27.559057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.785 qpair failed and we were unable to recover it. 00:36:11.785 [2024-12-07 01:03:27.559268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.559335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 
00:36:11.786 [2024-12-07 01:03:27.559590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.559630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.559733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.559769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.559929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.559961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.560139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.560172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.560447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.560480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.560614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.560647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.560834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.560899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.561181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.561249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.561564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.561629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.561866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.561937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 
00:36:11.786 [2024-12-07 01:03:27.562228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.562295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.562606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.562671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.562937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.563021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.563285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.563350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.563613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.563679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.563976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.564069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.564315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.564378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.564624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.564691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.564991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.565039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.565189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.565258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 
00:36:11.786 [2024-12-07 01:03:27.565477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.565542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.565834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.565898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.566144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.566211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.566441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.566506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.566755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.566818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.567030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.567117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.567325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.567385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.567525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.567559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.567755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.567819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 00:36:11.786 [2024-12-07 01:03:27.567982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.786 [2024-12-07 01:03:27.568025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.786 qpair failed and we were unable to recover it. 
00:36:11.792 [2024-12-07 01:03:27.623658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.623722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.623990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.624032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.624223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.624288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.624550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.624584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.624729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.624763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.624968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.625047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.625294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.625358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.625602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.625668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.625888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.625952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.626265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.626331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 
00:36:11.792 [2024-12-07 01:03:27.626603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.626641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.626800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.626833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.627031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.627101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.627286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.627350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.627540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.627604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.627833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.627912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.628241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.628321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.628591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.628669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.628885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.628918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.629053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.629088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 
00:36:11.792 [2024-12-07 01:03:27.629307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.629388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.792 [2024-12-07 01:03:27.629692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.792 [2024-12-07 01:03:27.629770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.792 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.629977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.630054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.630316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.630394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.630664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.630742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.630974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.631020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.631142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.631178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.631359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.631438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.631739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.631772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.631907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.631940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 
00:36:11.793 [2024-12-07 01:03:27.632197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.632276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.632511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.632589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.632822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.632856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.632971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.633014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.633257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.633338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.633527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.633586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.633851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.633911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.634176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.634238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.634532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.634609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.634896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.634956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 
00:36:11.793 [2024-12-07 01:03:27.635284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.635369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.635599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.635658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.635873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.635907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.636026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.636062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.636180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.636214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.636451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.636484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.636590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.636623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.636775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.636834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.637041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.637104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.637292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.637352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 
00:36:11.793 [2024-12-07 01:03:27.637537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.637608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.637838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.637898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.638229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.638292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.638549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.638629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.638870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.638929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.639167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.639246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.639541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.639620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.639861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.639893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.640143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.640223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 00:36:11.793 [2024-12-07 01:03:27.640463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.793 [2024-12-07 01:03:27.640541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.793 qpair failed and we were unable to recover it. 
00:36:11.794 [2024-12-07 01:03:27.640809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.640869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.641099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.641201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.641449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.641499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.641698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.641746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.641923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.641967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.642155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.642201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.642386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.642431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.642631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.642687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.642976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 
00:36:11.794 [2024-12-07 01:03:27.643321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.643992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.644128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.644262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.644415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.644567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 
00:36:11.794 [2024-12-07 01:03:27.644761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.644899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.644927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.645942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.645968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 
00:36:11.794 [2024-12-07 01:03:27.646226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.646863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.646982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.647020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.647130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.647156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.794 qpair failed and we were unable to recover it. 00:36:11.794 [2024-12-07 01:03:27.647248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.794 [2024-12-07 01:03:27.647274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.647358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.647384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.647474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.647500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 
00:36:11.795 [2024-12-07 01:03:27.647625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.647652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.647748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.647775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.647972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.648881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.648915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 
00:36:11.795 [2024-12-07 01:03:27.649211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.649938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.649970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.650103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.650368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.650533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.650659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 
00:36:11.795 [2024-12-07 01:03:27.650788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.650942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.650972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.651857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.651887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.652020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.652060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 00:36:11.795 [2024-12-07 01:03:27.652187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.795 [2024-12-07 01:03:27.652215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.795 qpair failed and we were unable to recover it. 
00:36:11.795 - 00:36:11.801 [2024-12-07 01:03:27.652306 - 01:03:27.686245] The following three-line failure repeats for every qpair reconnect attempt in this window; only the timestamps and the tqpair value (one of 0x1530730, 0x7f2388000b90, 0x7f238c000b90, 0x7f2394000b90) change between repetitions:
00:36:11.795 [2024-12-07 01:03:27.652306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.795 [2024-12-07 01:03:27.652336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:11.795 qpair failed and we were unable to recover it.
00:36:11.801 [2024-12-07 01:03:27.686414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.686465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.686688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.686737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.686891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.686940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.687210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.687271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.687495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.687544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.687730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.687789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.687981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.688063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.688293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.688342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.688499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.688548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.688753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.688802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 
00:36:11.801 [2024-12-07 01:03:27.688950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.689019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.689192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.801 [2024-12-07 01:03:27.689242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.801 qpair failed and we were unable to recover it. 00:36:11.801 [2024-12-07 01:03:27.689441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.689490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.689688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.689736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.689934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.689983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.690196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.690247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.690455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.690503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.690701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.690750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.690975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.691057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.691222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.691279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 
00:36:11.802 [2024-12-07 01:03:27.691451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.691500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.691654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.691704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.691855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.691903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.692038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.692091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.692272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.692326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.692521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.692570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.692748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.692797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.692980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.693046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.693255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.693305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.693463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.693511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 
00:36:11.802 [2024-12-07 01:03:27.693676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.693724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.693947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.694008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.694175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.694224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.694424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.694473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.694679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.694727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.694917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.694965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.695165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.695214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.695412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.695469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.695643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.695693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.695849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.695897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 
00:36:11.802 [2024-12-07 01:03:27.696101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.696153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.696313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.696363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.696541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.696591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.696740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.696791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.696949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.697011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.697206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.697255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.697475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.697524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.697711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.697761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.697931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.697980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 00:36:11.802 [2024-12-07 01:03:27.698191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.802 [2024-12-07 01:03:27.698239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.802 qpair failed and we were unable to recover it. 
00:36:11.803 [2024-12-07 01:03:27.698467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.698516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.698752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.698801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.699046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.699095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.699299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.699348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.699575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.699625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.699790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.699837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.700059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.700110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.700336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.700385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.700539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.700595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.700794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.700842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 
00:36:11.803 [2024-12-07 01:03:27.701019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.701070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.701220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.701268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.701454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.701504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.701707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.701756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.701981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.702046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.702228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.702277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.702423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.702471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.702668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.702717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.702901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.702949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.703158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.703209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 
00:36:11.803 [2024-12-07 01:03:27.703360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.703410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.703639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.703687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.703835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.703867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.704021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.704072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.704224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.704274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.704465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.704513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.704678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.704726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.704860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.704893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.705008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.705067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.705263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.705319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 
00:36:11.803 [2024-12-07 01:03:27.705473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.705521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.705687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.705720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.705904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.705937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.706051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.706084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.706231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.706280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.706447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.706497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.803 [2024-12-07 01:03:27.706690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.803 [2024-12-07 01:03:27.706747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.803 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.706899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.706932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.707066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.707099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.707265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.707315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 
00:36:11.804 [2024-12-07 01:03:27.707517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.707566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.707713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.707761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.707929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.707989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.708197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.708250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.708477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.708525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.708690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.708740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.708896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.708928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.709025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.709075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.709268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.709317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.709488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.709561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 
00:36:11.804 [2024-12-07 01:03:27.709747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.709795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.709952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.710204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.710458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.710604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.710753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.710892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.710925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.711059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.711206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.711343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 
00:36:11.804 [2024-12-07 01:03:27.711520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.711696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.711872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.711905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.712928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.712961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.713088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.713120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 
00:36:11.804 [2024-12-07 01:03:27.713222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.713254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.713356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.713388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.713482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.713513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.713676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.804 [2024-12-07 01:03:27.713708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.804 qpair failed and we were unable to recover it. 00:36:11.804 [2024-12-07 01:03:27.713801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.713833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.713943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.713980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 
00:36:11.805 [2024-12-07 01:03:27.714661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.714937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.714969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.715903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.715935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 00:36:11.805 [2024-12-07 01:03:27.716052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.805 [2024-12-07 01:03:27.716084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.805 qpair failed and we were unable to recover it. 
00:36:11.807 [2024-12-07 01:03:27.728523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.728559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.728700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.728726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.728808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.728836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.729003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.729045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.729205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.729399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.729450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.729696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.729732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.729879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.729913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.730124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.730152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.807 [2024-12-07 01:03:27.730263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.807 [2024-12-07 01:03:27.730295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.807 qpair failed and we were unable to recover it.
00:36:11.811 [2024-12-07 01:03:27.749469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.749500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.749642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.749674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.749832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.749886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.750064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.750092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.750189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.750215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.751134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.751268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.751429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.751596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.751734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 
00:36:11.811 [2024-12-07 01:03:27.751905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.751936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.752941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.753086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.753114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.753247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.753293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 
00:36:11.811 [2024-12-07 01:03:27.753436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.811 [2024-12-07 01:03:27.753483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.811 qpair failed and we were unable to recover it. 00:36:11.811 [2024-12-07 01:03:27.753629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.753670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.753772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.753798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.753892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.753918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 
00:36:11.812 [2024-12-07 01:03:27.754792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.754928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.754954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.755925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.755951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.756080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 
00:36:11.812 [2024-12-07 01:03:27.756187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.756418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.756601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.756772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.756948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.756974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.757105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.757287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.757531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.757672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.757812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 
00:36:11.812 [2024-12-07 01:03:27.757948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.757975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.758877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.758904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.812 qpair failed and we were unable to recover it. 00:36:11.812 [2024-12-07 01:03:27.759012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.812 [2024-12-07 01:03:27.759039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 
00:36:11.813 [2024-12-07 01:03:27.759256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.759825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.759989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.760174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.760344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.760506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.760679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 
00:36:11.813 [2024-12-07 01:03:27.760814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.760931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.760959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.761894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.761922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 
00:36:11.813 [2024-12-07 01:03:27.762193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.762919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.762963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 
00:36:11.813 [2024-12-07 01:03:27.763682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.763837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.763957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.764001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.764125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.764151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.764247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.764273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.764421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.813 [2024-12-07 01:03:27.764449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.813 qpair failed and we were unable to recover it. 00:36:11.813 [2024-12-07 01:03:27.764528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.764555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.764637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.764664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.764836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.764887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.764977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 
00:36:11.814 [2024-12-07 01:03:27.765129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.765244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.765387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.765528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.765682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.765837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.765868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.766022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.766163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.766333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.766485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 
00:36:11.814 [2024-12-07 01:03:27.766784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.766950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.766976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.767865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.767957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 
00:36:11.814 [2024-12-07 01:03:27.768085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.768896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.768923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 
00:36:11.814 [2024-12-07 01:03:27.769362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.814 [2024-12-07 01:03:27.769888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.814 [2024-12-07 01:03:27.769914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.814 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.770023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.770063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.770157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.770186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.770297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.770329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.770481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.770545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.770832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.770884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.771076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 
00:36:11.815 [2024-12-07 01:03:27.771220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.771334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.771526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.771711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.771867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.771898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 
00:36:11.815 [2024-12-07 01:03:27.772783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.772914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.772954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.773895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.773936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.774069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 
00:36:11.815 [2024-12-07 01:03:27.774189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.774307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.774485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.774667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.774910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.774936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.775030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.775057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.815 qpair failed and we were unable to recover it. 00:36:11.815 [2024-12-07 01:03:27.775168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.815 [2024-12-07 01:03:27.775194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.775306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.775355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.775533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.775597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.775920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.775956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 
00:36:11.816 [2024-12-07 01:03:27.776066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.776092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.776171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.776197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.776316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.776377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.776655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.776699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.776848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.776879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.777060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.777199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.777344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.777490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.777682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 
00:36:11.816 [2024-12-07 01:03:27.777886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.777918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.778869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.778897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 
00:36:11.816 [2024-12-07 01:03:27.779432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.779822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.779970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.780107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.780269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.780504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.780648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.780817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.780850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 
00:36:11.816 [2024-12-07 01:03:27.781028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.781068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.781192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.781221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.781354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.781384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.816 [2024-12-07 01:03:27.781506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.816 [2024-12-07 01:03:27.781557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.816 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.781759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.781791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.781939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.781970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.782146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.782180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.782323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.782372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.782452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.782478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.782699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.782751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 
00:36:11.817 [2024-12-07 01:03:27.782869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.782896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.783856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.783960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 
00:36:11.817 [2024-12-07 01:03:27.784248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.784848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.784989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.785159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.785290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.785440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.785629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 
00:36:11.817 [2024-12-07 01:03:27.785763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.785872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.785898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.786913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.786945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.787063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.787090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.787177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.787204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 
00:36:11.817 [2024-12-07 01:03:27.787355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.817 qpair failed and we were unable to recover it. 00:36:11.817 [2024-12-07 01:03:27.787501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.817 [2024-12-07 01:03:27.787533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.787668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.787703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.787854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.787901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.788872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.788899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 
00:36:11.818 [2024-12-07 01:03:27.789008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.789037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.789149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.789192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.789346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.789390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.789600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.789652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.789829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.789861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.789955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.790008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.790142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.790174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.790328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.790359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.790509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.790552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.790746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.790789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 
00:36:11.818 [2024-12-07 01:03:27.790982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.791907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.791934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 
00:36:11.818 [2024-12-07 01:03:27.792451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.792857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.792969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.793101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.793235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.793364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.793655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.818 qpair failed and we were unable to recover it. 00:36:11.818 [2024-12-07 01:03:27.793817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.818 [2024-12-07 01:03:27.793848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 
00:36:11.819 [2024-12-07 01:03:27.794021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.794160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.794372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.794565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.794721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.794865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.794891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 
00:36:11.819 [2024-12-07 01:03:27.795515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.795943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.795984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.796100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.796130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.796259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.796300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.796518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.796588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.796830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.796895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.797096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.797124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 00:36:11.819 [2024-12-07 01:03:27.797241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.819 [2024-12-07 01:03:27.797299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.819 qpair failed and we were unable to recover it. 
00:36:11.819 [2024-12-07 01:03:27.797392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.819 [2024-12-07 01:03:27.797423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:11.819 qpair failed and we were unable to recover it.
00:36:11.819 [2024-12-07 01:03:27.798803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.819 [2024-12-07 01:03:27.798832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:11.819 qpair failed and we were unable to recover it.
00:36:11.819 [2024-12-07 01:03:27.799082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.819 [2024-12-07 01:03:27.799123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:11.819 qpair failed and we were unable to recover it.
00:36:11.822 [2024-12-07 01:03:27.814780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.822 [2024-12-07 01:03:27.814846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:11.822 qpair failed and we were unable to recover it.
00:36:11.825 [... the same connect() failed (errno = 111) / qpair failed sequence repeats for tqpair=0x1530730, 0x7f2394000b90, 0x7f2388000b90 and 0x7f238c000b90 against addr=10.0.0.2, port=4420 through [2024-12-07 01:03:27.832891]; no qpair could be recovered ...]
00:36:11.825 [2024-12-07 01:03:27.833143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.833170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.833260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.833303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.833430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.833496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.833744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.833813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.834941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.834990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 
00:36:11.825 [2024-12-07 01:03:27.835076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.835103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.835208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.835234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.835349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.835376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.835507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.835538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.835712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.835738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.835980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.836156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.836262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.836428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.836557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 
00:36:11.825 [2024-12-07 01:03:27.836677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.836874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.836903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.825 [2024-12-07 01:03:27.837801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.825 [2024-12-07 01:03:27.837832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.825 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.837987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.838150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 
00:36:11.826 [2024-12-07 01:03:27.838300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.838469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.838604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.838750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.838942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.838968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.839116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.839256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.839419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.839564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.839753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 
00:36:11.826 [2024-12-07 01:03:27.839913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.839943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.840841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.840869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.841006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.841047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.841165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.841193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.841334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.841362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 
00:36:11.826 [2024-12-07 01:03:27.841476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.841503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.841671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.841730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.842037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.842082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.842178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.842207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.842437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.842502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.842816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.842895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.843098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.843249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.843375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.843555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 
00:36:11.826 [2024-12-07 01:03:27.843705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.843847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.843873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.826 [2024-12-07 01:03:27.844836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.826 [2024-12-07 01:03:27.844862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.826 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.844957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 
00:36:11.827 [2024-12-07 01:03:27.845168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.845355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.845466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.845604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.845759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.845927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.845954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.846072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.846113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.846221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.846254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.846399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.846448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.846659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.846692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 
00:36:11.827 [2024-12-07 01:03:27.846911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.846938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.847051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.847080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.847224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.847251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.847363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.847413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.847616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.847650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.847884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.847915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.848078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.848217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.848334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.848505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 
00:36:11.827 [2024-12-07 01:03:27.848692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.848953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.848980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.849095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.849135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.849263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.849303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.849491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.849555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.849652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.849681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.849898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.849929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.850032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.850172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.850332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 
00:36:11.827 [2024-12-07 01:03:27.850545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.850737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.850907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.850933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.851946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.851972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 00:36:11.827 [2024-12-07 01:03:27.852101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.827 [2024-12-07 01:03:27.852141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.827 qpair failed and we were unable to recover it. 
00:36:11.828 [2024-12-07 01:03:27.852249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.852279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.852394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.852422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.852538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.852565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.852708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.852734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.852843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.852869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 
00:36:11.828 [2024-12-07 01:03:27.853736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.853905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.853932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.854897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.854989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 
00:36:11.828 [2024-12-07 01:03:27.855238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.855875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.855985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 
00:36:11.828 [2024-12-07 01:03:27.856485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.856903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.856980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.857023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.857113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.857140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.857230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.857257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.857366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.857392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.828 qpair failed and we were unable to recover it. 00:36:11.828 [2024-12-07 01:03:27.857505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.828 [2024-12-07 01:03:27.857537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.857640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.857671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 
00:36:11.829 [2024-12-07 01:03:27.857799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.857831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.857945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.857972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.858866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.858899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.859060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.859100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.859233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.859295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 
00:36:11.829 [2024-12-07 01:03:27.859538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.859592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.859747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.859799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.859912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.859939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.860748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.860974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 
00:36:11.829 [2024-12-07 01:03:27.861189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.861305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.861507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.861670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.861940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.861974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.829 [2024-12-07 01:03:27.862123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.829 [2024-12-07 01:03:27.862151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.829 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.862285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.862315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.862469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.862742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.862776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.862891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.862921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 
00:36:11.830 [2024-12-07 01:03:27.863061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.863168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.863318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.863465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.863633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.863825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.863860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 
00:36:11.830 [2024-12-07 01:03:27.864505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.864799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.864954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.865890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.865927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 
00:36:11.830 [2024-12-07 01:03:27.866070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.866877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.866976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 
00:36:11.830 [2024-12-07 01:03:27.867483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.867853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.867966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.868011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.868124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.868152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.868234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.868260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.868406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.868452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.868605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.830 [2024-12-07 01:03:27.868657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.830 qpair failed and we were unable to recover it. 00:36:11.830 [2024-12-07 01:03:27.868822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.868852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.868962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.868989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 
00:36:11.831 [2024-12-07 01:03:27.869111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.869832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.869956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.870141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.870307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.870423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 
00:36:11.831 [2024-12-07 01:03:27.870602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.870882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.870936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.871850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.871884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 
00:36:11.831 [2024-12-07 01:03:27.872253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.872832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.872962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.873104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.873209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.873394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.873705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 
00:36:11.831 [2024-12-07 01:03:27.873861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.873892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.874868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.874992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.875159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.875302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 
00:36:11.831 [2024-12-07 01:03:27.875414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.875527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.831 qpair failed and we were unable to recover it. 00:36:11.831 [2024-12-07 01:03:27.875651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.831 [2024-12-07 01:03:27.875688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.875809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.875843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.875975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 
00:36:11.832 [2024-12-07 01:03:27.876702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.876868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.876960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.877899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.877980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.878134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 
00:36:11.832 [2024-12-07 01:03:27.878296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.878462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.878635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.878846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.878892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.878973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 
00:36:11.832 [2024-12-07 01:03:27.879851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.879954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.879979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.880898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.880983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 
00:36:11.832 [2024-12-07 01:03:27.881271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.881951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.881991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.832 qpair failed and we were unable to recover it. 00:36:11.832 [2024-12-07 01:03:27.882132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.832 [2024-12-07 01:03:27.882161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.882278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.882328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.882469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.882542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.882778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.882811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 
00:36:11.833 [2024-12-07 01:03:27.882985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.883107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.883267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.883450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.883738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.883884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.883910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 
00:36:11.833 [2024-12-07 01:03:27.884616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.884912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.884938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.885961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.885987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 
00:36:11.833 [2024-12-07 01:03:27.886086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.886946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.886973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 
00:36:11.833 [2024-12-07 01:03:27.887507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.833 [2024-12-07 01:03:27.887903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.833 [2024-12-07 01:03:27.887932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.833 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.888819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.888892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 
00:36:11.834 [2024-12-07 01:03:27.889047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.889210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.889398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.889558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.889827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.889945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.889971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.890112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.890270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.890392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.890570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 
00:36:11.834 [2024-12-07 01:03:27.890749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.890883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.890913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:11.834 [2024-12-07 01:03:27.891769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.834 [2024-12-07 01:03:27.891795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:11.834 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.891909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.891936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 
00:36:12.122 [2024-12-07 01:03:27.892185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.892883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.892969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.893002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.893113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.893140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.893219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.893246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 00:36:12.122 [2024-12-07 01:03:27.893338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.122 [2024-12-07 01:03:27.893364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.122 qpair failed and we were unable to recover it. 
00:36:12.122 [2024-12-07 01:03:27.893471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.893502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.893608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.893639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.893757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.893803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.893931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.893960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 
00:36:12.123 [2024-12-07 01:03:27.894779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.894921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.894951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.895906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.895986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 
00:36:12.123 [2024-12-07 01:03:27.896131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.896294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.896421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.896565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.896769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.896898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.896924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.897015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.897137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.897346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.897577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 
00:36:12.123 [2024-12-07 01:03:27.897784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.897923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.897957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.123 [2024-12-07 01:03:27.898852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.123 [2024-12-07 01:03:27.898884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.123 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 
00:36:12.124 [2024-12-07 01:03:27.899372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.899915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.899943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 
00:36:12.124 [2024-12-07 01:03:27.900735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.900868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.900894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.901946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.901985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 
00:36:12.124 [2024-12-07 01:03:27.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.902267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.902387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.902523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.902661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.902889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.902922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.903047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.903190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.903368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.903557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 
00:36:12.124 [2024-12-07 01:03:27.903726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.903893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.903924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.904036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.904064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.904155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.904186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.904324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.124 [2024-12-07 01:03:27.904356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.124 qpair failed and we were unable to recover it. 00:36:12.124 [2024-12-07 01:03:27.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.904518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.904634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.904668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.904804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.904855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.904979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.905117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 
00:36:12.125 [2024-12-07 01:03:27.905260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.905491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.905627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.905756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.905891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.905917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 
00:36:12.125 [2024-12-07 01:03:27.906639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.906882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.906921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.907823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 
00:36:12.125 [2024-12-07 01:03:27.907950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.907976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.908877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.908967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.909131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.909267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 
00:36:12.125 [2024-12-07 01:03:27.909373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.909490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.125 [2024-12-07 01:03:27.909633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.125 [2024-12-07 01:03:27.909660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.125 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.909749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.909780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.909927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.909953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 
00:36:12.126 [2024-12-07 01:03:27.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.910959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.910989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.911962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.911991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.912087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 
00:36:12.126 [2024-12-07 01:03:27.912198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.912348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.912478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.912637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.912848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.912880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 
00:36:12.126 [2024-12-07 01:03:27.913791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.913934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.913962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.914890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.914984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.915020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 00:36:12.126 [2024-12-07 01:03:27.915113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.126 [2024-12-07 01:03:27.915139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.126 qpair failed and we were unable to recover it. 
00:36:12.126 [2024-12-07 01:03:27.915228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.915351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.915502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.915662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.915801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.915955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.915985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.916099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.916254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.916426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.916563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 
00:36:12.127 [2024-12-07 01:03:27.916719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.916855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.916889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.917908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.917936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 
00:36:12.127 [2024-12-07 01:03:27.918343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.918942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.918982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.919124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.919283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.919460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.919605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 
00:36:12.127 [2024-12-07 01:03:27.919805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.127 [2024-12-07 01:03:27.919962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.127 [2024-12-07 01:03:27.919988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.127 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.920897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.920937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 
00:36:12.128 [2024-12-07 01:03:27.921151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.921922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.921956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.922070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.922098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.922194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.922221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.922341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.922390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.922615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.922664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 
00:36:12.128 [2024-12-07 01:03:27.922843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.922869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.922982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.923117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.923241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.923377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.923531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.923779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.923826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 
00:36:12.128 [2024-12-07 01:03:27.924439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.924917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.924945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.128 qpair failed and we were unable to recover it. 00:36:12.128 [2024-12-07 01:03:27.925676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.128 [2024-12-07 01:03:27.925702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 
00:36:12.129 [2024-12-07 01:03:27.925806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.925853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.925939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.925968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.926819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.926853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.927024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.927213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 
00:36:12.129 [2024-12-07 01:03:27.927426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.927601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.927823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.927993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.928119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.928145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.928360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.928419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.928604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.928656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.928795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.928821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.928931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.928957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.929097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.929131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 
00:36:12.129 [2024-12-07 01:03:27.929265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.929296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.929426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.929485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.929791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.929857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.930022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.930066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.930151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.930177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.930325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.930374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.930627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.930692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.931013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.931174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.931313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 
00:36:12.129 [2024-12-07 01:03:27.931484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.931666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.931901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.931933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.932101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.932129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.932218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.932244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.932398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.932428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.932595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.129 [2024-12-07 01:03:27.932660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.129 qpair failed and we were unable to recover it. 00:36:12.129 [2024-12-07 01:03:27.932962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.932992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.933160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.933186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.933323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.933394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 
00:36:12.130 [2024-12-07 01:03:27.933644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.933722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.933907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.933948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.934064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.934092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.934179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.934206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.934316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.934343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.934568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.934615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.934819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.934901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.935162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.935283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.935390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 
00:36:12.130 [2024-12-07 01:03:27.935509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.935620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.935902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.935976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.936126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.936153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.936272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.936319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.936458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.936485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.936628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.936687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.936929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.937012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.937138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.937165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.937252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.937279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 
00:36:12.130 [2024-12-07 01:03:27.937473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.937539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.937820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.937884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.938108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.938135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.938222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.938250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.938440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.938467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.938582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.938610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.938857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.938884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.939093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.939121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.939262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.939288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.939439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.939504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 
00:36:12.130 [2024-12-07 01:03:27.939750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.939815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.940023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.940050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.940165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.940192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.940282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.940310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.940419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.130 [2024-12-07 01:03:27.940446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.130 qpair failed and we were unable to recover it. 00:36:12.130 [2024-12-07 01:03:27.940577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.940972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.941061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.941181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.941208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.941331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.941361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.941603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.941666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 
00:36:12.131 [2024-12-07 01:03:27.941943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.942045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.942190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.942217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.942331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.942357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.942494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.942553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.942797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.942863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.943081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.943108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.943191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.943217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.943379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.943617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.943662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.943915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.943945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 
00:36:12.131 [2024-12-07 01:03:27.944065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.944224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.944340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.944447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.944656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.944923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.944968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.945091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.945119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.945237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.945263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.945379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.945405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.945563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.945615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 
00:36:12.131 [2024-12-07 01:03:27.945818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.945876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.946058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.946121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.946279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.946309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.946483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.946547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.946770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.946841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.947113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.947141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.947241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.947272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.947555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.131 [2024-12-07 01:03:27.947622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.131 qpair failed and we were unable to recover it. 00:36:12.131 [2024-12-07 01:03:27.947860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.947925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.948147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.948214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 
00:36:12.132 [2024-12-07 01:03:27.948486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.948550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.948734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.948801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.949055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.949087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.949240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.949270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.949437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.949505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.949753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.949818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.950054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.950085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.950241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.950291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.950459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.950541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.950751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.950818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 
00:36:12.132 [2024-12-07 01:03:27.951066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.951102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.951256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.951319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.951605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.951673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.951893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.951959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.952149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.952181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.952338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.952368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.952467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.952529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.952784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.952850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.953052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.953082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.953215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.953245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 
00:36:12.132 [2024-12-07 01:03:27.953343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.953412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.953732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.953804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.954059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.954090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.954215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.954245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.954427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.954492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.954687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.954752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.954946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.955043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.955211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.955277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.955460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.955525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.955757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.955821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 
00:36:12.132 [2024-12-07 01:03:27.956035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.956067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.956193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.956223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.956434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.956498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.956767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.956814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.957014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.957065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.132 [2024-12-07 01:03:27.957158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.132 [2024-12-07 01:03:27.957188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.132 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.957318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.957349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.957524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.957624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.957886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.957918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.958055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.958088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 
00:36:12.133 [2024-12-07 01:03:27.958220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.958252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.958538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.958629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.958924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.958990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.959151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.959181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.959341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.959412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.959592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.959671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.959886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.959958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.960179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.960210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.960343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.960374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.960499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.960530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 
00:36:12.133 [2024-12-07 01:03:27.960715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.960933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.960964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.961107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.961138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.961276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.961341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.961627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.961693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.961990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.962058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.962183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.962214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.962338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.962368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.962495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.962559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.962810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.962841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 
00:36:12.133 [2024-12-07 01:03:27.963080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.963147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.963397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.963461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.963757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.964111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.964159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.964382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.964448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.964690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.964756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.965063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.965109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.965270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.965346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.965628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.965692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.965969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.966072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 
00:36:12.133 [2024-12-07 01:03:27.966269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.966334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.966576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.966641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.966898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.966945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.133 qpair failed and we were unable to recover it. 00:36:12.133 [2024-12-07 01:03:27.967174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.133 [2024-12-07 01:03:27.967240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.967446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.967513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.967738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.967804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.968068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.968135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.968385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.968462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.968752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.968817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.969080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.969147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 
00:36:12.134 [2024-12-07 01:03:27.969442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.969506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.969760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.969825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.970112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.970179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.970418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.970482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.970772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.970837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.971128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.971194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.971460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.971524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.971826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.971902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.972173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.972242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.972492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.972560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 
00:36:12.134 [2024-12-07 01:03:27.972860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.972936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.973230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.973295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.973556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.973622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.973827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.973896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.974180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.974250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.974524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.974588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.974808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.974854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.975094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.975163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.975432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.975498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.975689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.975765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 
00:36:12.134 [2024-12-07 01:03:27.976056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.976122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.976372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.976438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.976738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.976814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.977065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.977132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.977387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.977453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.977673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.977738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.977976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.978067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.978320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.978385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.978617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.978681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 00:36:12.134 [2024-12-07 01:03:27.978924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.134 [2024-12-07 01:03:27.978989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.134 qpair failed and we were unable to recover it. 
00:36:12.140 [2024-12-07 01:03:28.044111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.044184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.044483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.044547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.044850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.044925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.045150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.045215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.045507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.045573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.045882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.045948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.046268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.046339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.046635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.046711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.047017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.047085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.047381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.047445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 
00:36:12.140 [2024-12-07 01:03:28.047704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.047770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.048031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.048098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.048344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.048408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.048649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.048714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.048947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.049024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.049328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.049403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.049706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.049771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.050034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.050101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.050314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.050381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.050669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.050733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 
00:36:12.140 [2024-12-07 01:03:28.051040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.051107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.051406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.051471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.051723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.140 [2024-12-07 01:03:28.051788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.140 qpair failed and we were unable to recover it. 00:36:12.140 [2024-12-07 01:03:28.052047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.052115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.052417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.052493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.052806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.052871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.053078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.053147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.053455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.053527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.053832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.053898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.054158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.054226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 
00:36:12.141 [2024-12-07 01:03:28.054472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.054539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.054788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.054856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.055154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.055231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.055534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.055600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.055900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.055975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.056239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.056304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.056604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.056669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.056909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.056975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.057283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.057350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.057610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.057676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 
00:36:12.141 [2024-12-07 01:03:28.057935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.058018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.058281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.058349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.058648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.058724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.058977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.059062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.059312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.059377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.059625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.059690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.059982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.060083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.060339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.060414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.060630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.060695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.060949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.061033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 
00:36:12.141 [2024-12-07 01:03:28.061323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.061387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.061626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.061693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.062013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.062081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.062378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.062454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.062757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.062821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.141 [2024-12-07 01:03:28.063081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.141 [2024-12-07 01:03:28.063152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.141 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.063441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.063738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.063814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.064105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.064172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.064464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.064527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-12-07 01:03:28.064771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.064835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.065105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.065174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.065426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.065493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.065784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.065850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.066145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.066210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.066501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.066566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.066852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.066917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.067224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.067301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.067589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.067653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.067900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.067965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-12-07 01:03:28.068254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.068320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.068510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.068577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.068834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.068899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.069207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.069278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.069510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.069579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.069871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.069938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.070203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.070268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.070562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.070627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.070867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.070932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.071211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.071278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-12-07 01:03:28.071580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.071655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.071944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.072027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.072281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.072348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.072550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.072615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.072861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.072929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.073217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.073284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.073534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.073601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.073821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.073898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.074171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.074238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.074496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.074562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-12-07 01:03:28.074846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.074911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.075298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.075545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.075609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-12-07 01:03:28.075902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.142 [2024-12-07 01:03:28.075967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.076293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.076360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.076627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.076693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.076978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.077070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.077320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.077388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.077673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.077741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.078049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.078118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-12-07 01:03:28.078399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.078464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.078766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.078831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.079074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.079141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.079336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.079401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.079589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.079655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.079939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.080018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.080262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.080327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.080623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.080688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.080891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.080957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.081262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.081337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-12-07 01:03:28.081615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.081680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.081962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.082058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.082294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.082359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.082657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.082733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.083056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.083125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.083417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.083788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.083853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.084103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.084170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.084454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.084520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.084759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.084825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-12-07 01:03:28.085121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.085207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.085544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.085628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.085845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.085917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.086168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.086236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.086440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.086505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.086767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.086833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.087117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.087187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.087424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.087500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.087747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.087813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-12-07 01:03:28.088107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.143 [2024-12-07 01:03:28.088174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-12-07 01:03:28.088434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.143 [2024-12-07 01:03:28.088499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:12.143 qpair failed and we were unable to recover it.
[The same three-entry error sequence repeats continuously from 01:03:28.088 through 01:03:28.154: posix_sock_create reports "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420 (for tqpair=0x7f238c000b90, and from 01:03:28.101 onward for tqpair=0x1530730), and each attempt ends with "qpair failed and we were unable to recover it."]
00:36:12.149 [2024-12-07 01:03:28.154942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.155018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.155229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.155294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.155485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.155547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.155759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.155819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.156098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.156159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.156376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.156437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.156649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.156710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.156900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.156958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.157237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.157297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.157547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.157606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 
00:36:12.149 [2024-12-07 01:03:28.157835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.157895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.158137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.158199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.158375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.158435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.158641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.158700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.158927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.149 [2024-12-07 01:03:28.158990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.149 qpair failed and we were unable to recover it. 00:36:12.149 [2024-12-07 01:03:28.159250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.159320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.159501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.159561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.159830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.159900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.160166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.160227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.160455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.160518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 
00:36:12.150 [2024-12-07 01:03:28.160732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.160792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.161047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.161109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.161314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.161374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.161590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.161650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.161927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.161987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.162236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.162300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.162528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.162588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.162789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.162849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.163119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.163181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.163375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.163434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 
00:36:12.150 [2024-12-07 01:03:28.163629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.163689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.163965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.164038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.164289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.164350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.164546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.164608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.164842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.164901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.165102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.165165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.165372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.165432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.165628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.165690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.165915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.165976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.166241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.166302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 
00:36:12.150 [2024-12-07 01:03:28.166495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.166554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.166739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.166798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.167042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.167105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.167333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.167394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.167628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.167697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.167992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.168067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.168303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.168363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.168551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.168610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.150 qpair failed and we were unable to recover it. 00:36:12.150 [2024-12-07 01:03:28.168827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.150 [2024-12-07 01:03:28.168887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.169113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.169174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 
00:36:12.151 [2024-12-07 01:03:28.169396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.169455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.169641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.169701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.169900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.169958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.170249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.170314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.170545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.170605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.170902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.170965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.171242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.171302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.171530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.171591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.171801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.171861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.172086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.172148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 
00:36:12.151 [2024-12-07 01:03:28.172363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.172424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.172664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.172722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.172959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.173037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.173278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.173339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.173522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.173580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.173815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.173874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.174080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.174141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.174351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.174410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.174639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.174698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.174937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.175011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 
00:36:12.151 [2024-12-07 01:03:28.175189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.175249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.175449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.175508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.175728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.175789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.176071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.176132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.176371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.176430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.176620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.176681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.176910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.176971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.177173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.177232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.177427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.177486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.177683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.177742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 
00:36:12.151 [2024-12-07 01:03:28.178039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.178100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.178338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.178397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.178658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.178717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.178890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.178947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.179166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.179226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.179487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.179546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.151 [2024-12-07 01:03:28.179746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.151 [2024-12-07 01:03:28.179804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.151 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.180033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.180095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.180291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.180351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.180578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.180636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 
00:36:12.152 [2024-12-07 01:03:28.180833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.180892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.181105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.181165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.181409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.181468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.181707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.181768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.181980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.182051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.182288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.182348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.182630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.182690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.182897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.182957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.183169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.183229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.183418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.183477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 
00:36:12.152 [2024-12-07 01:03:28.183760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.183819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.184067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.184129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.184367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.184427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.184606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.184665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.184880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.184939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.185200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.185260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.185484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.185543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.185744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.185802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.185991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.186065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.186331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.186390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 
00:36:12.152 [2024-12-07 01:03:28.186661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.186719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.186940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.187010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.187240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.187318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.187527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.187586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.187804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.187865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.188138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.188199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.188405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.188465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.188633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.188692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.188862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.188924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.189170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.189232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 
00:36:12.152 [2024-12-07 01:03:28.189473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.189533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.189726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.189785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.190029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.190089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.190322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.190381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.190625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.152 [2024-12-07 01:03:28.190685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.152 qpair failed and we were unable to recover it. 00:36:12.152 [2024-12-07 01:03:28.190907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.190967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.191207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.191270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.191510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.191570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.191817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.191877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.192115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.192176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 
00:36:12.153 [2024-12-07 01:03:28.192418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.192477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.192665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.192724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.192970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.193042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.193293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.193351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.193530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.193589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.193794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.193853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.194059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.194121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.194301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.194360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.194551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.194611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.194827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.194894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 
00:36:12.153 [2024-12-07 01:03:28.195111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.195172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.195398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.195458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.195704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.195763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.195943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.196019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.196214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.196274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.196490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.196548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.196779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.196838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.197050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.197111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.197309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.197370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.197570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.197629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 
00:36:12.153 [2024-12-07 01:03:28.197831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.197890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.198114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.198176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.198452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.198511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.198713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.198772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.198952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.199031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.199249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.199309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.199565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.199625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.199838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.199896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.200151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.200212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.200437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.200494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 
00:36:12.153 [2024-12-07 01:03:28.200775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.200834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.201045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.201107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.201349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.153 [2024-12-07 01:03:28.201409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.153 qpair failed and we were unable to recover it. 00:36:12.153 [2024-12-07 01:03:28.201646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.201705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.201944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.202021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.202226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.202286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.202516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.202577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.202822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.202882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.203072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.203133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.203328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.203388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 
00:36:12.154 [2024-12-07 01:03:28.203608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.203668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.203857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.203916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.204128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.204189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.204395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.204455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.204673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.204731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.204926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.204984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.205280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.205341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.205524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.205583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.205810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.205870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.206160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.206227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 
00:36:12.154 [2024-12-07 01:03:28.206476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.206537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.206739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.206798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.206976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.207052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.207247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.207306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.207568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.207821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.207879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.208070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.208130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.208401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.208460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.208693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.208754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.209064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.209128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 
00:36:12.154 [2024-12-07 01:03:28.209350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.209409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.209682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.209742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.210017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.210079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.210276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.210335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.210552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.210612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.210837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.210895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.154 [2024-12-07 01:03:28.211108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.154 [2024-12-07 01:03:28.211170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.154 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.211339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.211398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.211587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.211646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.211849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.211908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 
00:36:12.155 [2024-12-07 01:03:28.212126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.212187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.212464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.212523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.212753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.212812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.213045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.213108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.213370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.213436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.213645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.213704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.213929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.213987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.214266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.214334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.214582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.214642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.214910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.214968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 
00:36:12.155 [2024-12-07 01:03:28.215214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.215273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.215468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.215526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.215765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.215825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.216021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.216083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.216280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.216339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.216562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.216622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.216873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.216933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.217199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.217260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.217466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.217525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.217700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.217759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 
00:36:12.155 [2024-12-07 01:03:28.218029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.218092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.218282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.218342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.218549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.218609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.218837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.218896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.219101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.219162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.219362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.219425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.219666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.219727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.219920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.219979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.220238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.220302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.220529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.220588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 
00:36:12.155 [2024-12-07 01:03:28.220773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.220831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.221073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.221135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.221390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.221453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.221654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.221713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.155 [2024-12-07 01:03:28.221948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.155 [2024-12-07 01:03:28.222060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.155 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.222245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.222306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.222537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.222596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.222842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.222900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.223145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.223208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.223479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.223539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 
00:36:12.156 [2024-12-07 01:03:28.223728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.223789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.223972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.224053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.224286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.224345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.224528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.224587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.224794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.224858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.225064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.225126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.225313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.225373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.225562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.225623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.225890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.225950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.226164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.226225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 
00:36:12.156 [2024-12-07 01:03:28.226411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.226469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.226654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.226714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.226911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.227027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.227274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.227333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.227547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.227607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.227846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.227906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.228164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.228225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.228406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.228465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.228684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.228742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.228971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.229056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 
00:36:12.156 [2024-12-07 01:03:28.229304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.229364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.229601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.229669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.229870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.229929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.230198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.230259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.230521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.230580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.230773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.230836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.231039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.231100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.231293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.231351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.231628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.231689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.231935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.232012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 
00:36:12.156 [2024-12-07 01:03:28.232222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.232281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.232505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.232563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.232804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.156 [2024-12-07 01:03:28.232864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.156 qpair failed and we were unable to recover it. 00:36:12.156 [2024-12-07 01:03:28.233107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.233170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.233363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.233422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.233659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.233719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.234021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.234084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.234375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.234435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.234648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.234708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.234904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.234963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 
00:36:12.157 [2024-12-07 01:03:28.235180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.235242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.235475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.235535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.235757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.235815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.236021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.236082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.236277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.236337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.236536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.236594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.236832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.236892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.237093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.237155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.237366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.237426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.237632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.237695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 
00:36:12.157 [2024-12-07 01:03:28.237975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.238058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.238249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.238308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.238504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.238567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.238807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.238867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.239105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.239166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.239353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.239416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.239664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.239724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.239946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.240016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.240214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.240274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 00:36:12.157 [2024-12-07 01:03:28.240542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.157 [2024-12-07 01:03:28.240601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.157 qpair failed and we were unable to recover it. 
00:36:12.157 [2024-12-07 01:03:28.240841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.157 [2024-12-07 01:03:28.240900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.157 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every connect attempt logged between 01:03:28.241 and 01:03:28.301 ...]
00:36:12.445 [2024-12-07 01:03:28.301115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.445 [2024-12-07 01:03:28.301149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.445 qpair failed and we were unable to recover it.
00:36:12.445 [2024-12-07 01:03:28.301297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.301331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.301565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.301629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.301878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.301937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.302145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.302179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.302320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.302354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.302597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.302662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.302865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.302929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.303168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.303203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.303385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.303445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.303668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.303737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 
00:36:12.445 [2024-12-07 01:03:28.304008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.304070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.304199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.304232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.304483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.304542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.304843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.304899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.305132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.305166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.305387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.305451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.305716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.305780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.306034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.306069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.306169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.306202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.306350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.306406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 
00:36:12.445 [2024-12-07 01:03:28.306662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.306726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.307011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.307063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.307178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.307212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.307359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.307394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.307539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.307572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.307725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.308024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.308089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.308218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.308252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.308401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.308435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.308639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.308696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 
00:36:12.445 [2024-12-07 01:03:28.308954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.309030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.309202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.445 [2024-12-07 01:03:28.309236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.445 qpair failed and we were unable to recover it. 00:36:12.445 [2024-12-07 01:03:28.309375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.309409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.309633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.309688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.309933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.309988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.310190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.310224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.310388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.310453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.310654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.310710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.310957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.311041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.311167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.311201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 
00:36:12.446 [2024-12-07 01:03:28.311390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.311444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.311694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.311748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.311985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.312058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.312239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.312272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.312471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.312525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.312733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.312788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.313057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.313092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.313262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.313296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.313405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.313439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.313610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.313665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 
00:36:12.446 [2024-12-07 01:03:28.313900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.313955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.314166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.314201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.314383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.314437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.314654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.314709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.314902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.314951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.315193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.315242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.315504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.315559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.315773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.315830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.316066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.316117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.316378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.316434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 
00:36:12.446 [2024-12-07 01:03:28.316633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.316689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.316901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.316955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.317244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.317292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.317525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.317589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.317817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.317873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.318130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.318180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.318376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.318429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.318689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.318746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.318983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.319051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 00:36:12.446 [2024-12-07 01:03:28.319230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.446 [2024-12-07 01:03:28.319285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.446 qpair failed and we were unable to recover it. 
00:36:12.446 [2024-12-07 01:03:28.319556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.319607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.319881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.319937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.320186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.320237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.320506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.320562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.320780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.320832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.321029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.321087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.321340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.321395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.321575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.321632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.321892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.321943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.322127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.322180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 
00:36:12.447 [2024-12-07 01:03:28.322365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.322422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.322632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.322687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.322945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.323014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.323247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.323302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.323552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.323608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.323813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.323868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.324046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.324102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.324292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.324347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.324564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.324619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.324865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.324920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 
00:36:12.447 [2024-12-07 01:03:28.325149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.325205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.325428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.325484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.325695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.325749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.325960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.326029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.326322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.326591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.326645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.326860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.326916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.327157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.327214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.327390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.327444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.327715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.327771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 
00:36:12.447 [2024-12-07 01:03:28.327989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.328060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.328275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.328329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.328522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.328579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.328825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.328882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.329097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.329155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.329385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.329440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.329626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.329681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.329891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.329946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.447 [2024-12-07 01:03:28.330226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.447 [2024-12-07 01:03:28.330282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.447 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.330467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.330522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 
00:36:12.448 [2024-12-07 01:03:28.330742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.330796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.331065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.331123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.331394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.331448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.331706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.331760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.331978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.332046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.332269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.332326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.332543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.332600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.332804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.332858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.333048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.333107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.333365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.333421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 
00:36:12.448 [2024-12-07 01:03:28.333675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.333730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.333981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.334050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.334267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.334323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.334524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.334580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.334796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.334851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.335107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.335164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.335352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.335407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.335663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.335718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.335934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.335988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.336253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.336308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 
00:36:12.448 [2024-12-07 01:03:28.336464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.336519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.336744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.336807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.337070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.337128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.337364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.337420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.337636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.337691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.337911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.337967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.338195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.338251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.338462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.338516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.338672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.338727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.338948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.339019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 
00:36:12.448 [2024-12-07 01:03:28.339240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.339295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.339550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.339605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.339831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.448 [2024-12-07 01:03:28.339886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.448 qpair failed and we were unable to recover it. 00:36:12.448 [2024-12-07 01:03:28.340105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.340162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.340350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.340407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.340620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.340676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.340929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.340985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.341295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.341349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.341632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.341710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.341971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.342069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 
00:36:12.449 [2024-12-07 01:03:28.342292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.342369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.342693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.342781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.343066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.343128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.343428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.343511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.343805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.343881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.344136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.344214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.344482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.344561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.344821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.344898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.345200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.345288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.345552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.345628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 
00:36:12.449 [2024-12-07 01:03:28.345864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.345925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.346203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.346281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.346482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.346560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.346834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.346893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.347210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.347292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.347598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.347676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.347906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.347965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.348189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.348267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.348531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.348608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.348880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.348939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 
00:36:12.449 [2024-12-07 01:03:28.349188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.349267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.349537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.349597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.349835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.349898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.350171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.350262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.350572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.350633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.350903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.350963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.351273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.351357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.351653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.351731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.351976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.352055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.449 [2024-12-07 01:03:28.352344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.352432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 
00:36:12.449 [2024-12-07 01:03:28.352667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.449 [2024-12-07 01:03:28.352745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.449 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.353023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.353083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.353375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.353452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.353718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.353795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.354062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.354123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.354376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.354465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.354719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.354799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.354979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.355056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.355331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.355617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.355698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 
00:36:12.450 [2024-12-07 01:03:28.355941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.356015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.356268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.356345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.356674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.356761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.357073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.357159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.357483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.357566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.357866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.357954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.358216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.358279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.358476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.358535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.358758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.358826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.359083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.359148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 
00:36:12.450 [2024-12-07 01:03:28.359370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.359453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.359736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.359820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.360121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.360203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.360563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.360665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.360953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.361050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.361388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.361474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.361828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.361933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.362265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.362347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.362644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.362721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.362984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.363063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 
00:36:12.450 [2024-12-07 01:03:28.363324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.363421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.363755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.363836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.363943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153e5f0 (9): Bad file descriptor 00:36:12.450 [2024-12-07 01:03:28.364418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.364515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.364792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.364862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.365132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.365197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.365461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.365526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.365741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.365810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.366116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.366190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 00:36:12.450 [2024-12-07 01:03:28.366391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.450 [2024-12-07 01:03:28.366456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.450 qpair failed and we were unable to recover it. 
00:36:12.451 [2024-12-07 01:03:28.366677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.366746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.367052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.367125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.367345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.367407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.367691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.367751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.368028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.368090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.368323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.368384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.368702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.368791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.369068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.369130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.369356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.369417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.369645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.369706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 
00:36:12.451 [2024-12-07 01:03:28.369939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.370017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.370296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.370357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.370641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.370701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.370936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.371049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.371287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.371348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.371625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.371690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.371937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.372018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.372222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.372282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.372519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.372580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.372799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.372858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 
00:36:12.451 [2024-12-07 01:03:28.373138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.373200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.373481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.373542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.373840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.373905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.374161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.374223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.374488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.374554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.374847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.374911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.375223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.375283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.375551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.375611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.375790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.375849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.376125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.376187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 
00:36:12.451 [2024-12-07 01:03:28.376434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.376495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.376705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.376787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.377053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.377114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.377338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.377404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.377618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.377685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.378052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.378114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.378392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.378451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.451 qpair failed and we were unable to recover it. 00:36:12.451 [2024-12-07 01:03:28.378679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.451 [2024-12-07 01:03:28.378740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.378958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.379058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.379345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.379404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 
00:36:12.452 [2024-12-07 01:03:28.379617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.379677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.379933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.380012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.380227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.380293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.380589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.380655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.380909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.380975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.381262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.381327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.381624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.381699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.381933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.382021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.382298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.382363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.382655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.382719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 
00:36:12.452 [2024-12-07 01:03:28.382976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.383062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.383324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.383390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.383642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.383707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.383947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.384030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.384293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.384359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.384576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.384642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.384907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.384973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.385287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.385353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.385597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.385661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.385855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.385920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 
00:36:12.452 [2024-12-07 01:03:28.386186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.386253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.386487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.386551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.386809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.386874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.387120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.387188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.387427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.387491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.387778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.387842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.388086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.388153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.388413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.388477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.388774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.388838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.389138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.389204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 
00:36:12.452 [2024-12-07 01:03:28.389496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.389561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.389852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.389916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.390192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.390258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.390482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.390556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.390817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.452 [2024-12-07 01:03:28.390882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.452 qpair failed and we were unable to recover it. 00:36:12.452 [2024-12-07 01:03:28.391098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.453 [2024-12-07 01:03:28.391164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.453 qpair failed and we were unable to recover it. 00:36:12.453 [2024-12-07 01:03:28.391381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.453 [2024-12-07 01:03:28.391449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.453 qpair failed and we were unable to recover it. 00:36:12.453 [2024-12-07 01:03:28.391742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.453 [2024-12-07 01:03:28.391808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.453 qpair failed and we were unable to recover it. 00:36:12.453 [2024-12-07 01:03:28.392096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.453 [2024-12-07 01:03:28.392162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.453 qpair failed and we were unable to recover it. 00:36:12.453 [2024-12-07 01:03:28.392374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.453 [2024-12-07 01:03:28.392439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.453 qpair failed and we were unable to recover it. 
00:36:12.453 [2024-12-07 01:03:28.392682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.453 [2024-12-07 01:03:28.392748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:12.453 qpair failed and we were unable to recover it.
00:36:12.458 [log condensed: the identical three-line sequence above ("connect() failed, errno = 111" / "sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats for every subsequent connection retry, with timestamps from [2024-12-07 01:03:28.393043] through [2024-12-07 01:03:28.461761]]
00:36:12.458 [2024-12-07 01:03:28.462029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.458 [2024-12-07 01:03:28.462097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.458 qpair failed and we were unable to recover it. 00:36:12.458 [2024-12-07 01:03:28.462387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.458 [2024-12-07 01:03:28.462452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.458 qpair failed and we were unable to recover it. 00:36:12.458 [2024-12-07 01:03:28.462692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.458 [2024-12-07 01:03:28.462757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.458 qpair failed and we were unable to recover it. 00:36:12.458 [2024-12-07 01:03:28.463029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.458 [2024-12-07 01:03:28.463099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.458 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.463403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.463478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.463783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.463848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.464105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.464174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.464436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.464501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.464791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.464856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.465098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.465166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 
00:36:12.459 [2024-12-07 01:03:28.465466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.465531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.465817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.465883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.466161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.466231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.466533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.466609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.466864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.466931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.467192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.467258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.467475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.467542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.467804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.467869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.468115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.468182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.468424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.468490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 
00:36:12.459 [2024-12-07 01:03:28.468791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.468856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.469155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.469222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.469476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.469542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.469844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.469919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.470192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.470258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.470546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.470611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.470857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.470923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.471141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.471208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.471509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.471585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.471873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.471938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 
00:36:12.459 [2024-12-07 01:03:28.472208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.472276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.472528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.472895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.472971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.473215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.473282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.473528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.473593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.473846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.473912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.474214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.474552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.474617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.474907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.474972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.475291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.475367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 
00:36:12.459 [2024-12-07 01:03:28.475630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.475696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.459 [2024-12-07 01:03:28.475945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.459 [2024-12-07 01:03:28.476038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.459 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.476297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.476362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.476645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.476709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.477049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.477116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.477377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.477443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.477703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.477767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.478030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.478099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.478316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.478382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.478660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.478725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 
00:36:12.460 [2024-12-07 01:03:28.479031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.479109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.479401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.479466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.479714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.479780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.480076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.480142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.480402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.480470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.480734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.480799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.481054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.481121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.481409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.481476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.481694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.481758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.481976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.482057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 
00:36:12.460 [2024-12-07 01:03:28.482316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.482382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.482631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.482696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.482945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.483032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.483329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.483415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.483723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.483789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.484075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.484142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.484454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.484519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.484814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.484890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.485207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.485273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.485568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.485645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 
00:36:12.460 [2024-12-07 01:03:28.485891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.485958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.486268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.486339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.486644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.486710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.486971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.487054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.487344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.487408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.487617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.487683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.487967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.488061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.488350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.488417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.488663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.488728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 00:36:12.460 [2024-12-07 01:03:28.489026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-12-07 01:03:28.489104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.460 qpair failed and we were unable to recover it. 
00:36:12.461 [2024-12-07 01:03:28.489394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.489460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.489767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.489839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.490141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.490209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.490423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.490492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.490755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.490820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.491120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.491187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.491477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.491542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.491837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.491902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.492186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.492252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.492500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.492567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 
00:36:12.461 [2024-12-07 01:03:28.492836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.492902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.493181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.493248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.493499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.493564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.493832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.493899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.494228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.494303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.494564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.494631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.494883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.494950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.495222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.495497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.495562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.495800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.495865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 
00:36:12.461 [2024-12-07 01:03:28.496130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.496197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.496502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.496568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.496859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.496924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.497189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.497268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.497573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.497650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.497910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.497975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.498285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.498350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.498645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.498710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.498956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.499037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.499260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.499327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 
00:36:12.461 [2024-12-07 01:03:28.499623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.499698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.499983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.500082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.500335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.500402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.500616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.500683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.500970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.501067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.501343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.501409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.501656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-12-07 01:03:28.501721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.461 qpair failed and we were unable to recover it. 00:36:12.461 [2024-12-07 01:03:28.502027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.502094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.502347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.502413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.502703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.502769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 
00:36:12.462 [2024-12-07 01:03:28.503074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.503162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.503479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.503552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.503848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.503913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.504196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.504263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.504525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.504589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.504836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.504902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.505120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.505188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.505487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.505563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.505823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.505888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.506149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.506216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 
00:36:12.462 [2024-12-07 01:03:28.506430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.506499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.506763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.506830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.507126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.507205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.507417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.507486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.507775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.507841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.508144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.508213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.508520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.508593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.508887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.508952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.509268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 00:36:12.462 [2024-12-07 01:03:28.509602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-12-07 01:03:28.509666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.462 qpair failed and we were unable to recover it. 
00:36:12.468 [2024-12-07 01:03:28.559640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.559697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.559905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.559962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.560136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.560168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.560302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.560335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.560448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.560481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.560687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.560745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.560956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.561135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.561336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.561509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 
00:36:12.468 [2024-12-07 01:03:28.561755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.561925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.561968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.562166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.562201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.562449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.562484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.562622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.562655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.562878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.562912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.563123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.563158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.563332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.563365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.563492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.563542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.563768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.563824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 
00:36:12.468 [2024-12-07 01:03:28.564008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.564066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.564170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.564205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.564442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.564517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.564755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.564790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.564956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.565059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.565186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.565220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.468 [2024-12-07 01:03:28.565335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.468 [2024-12-07 01:03:28.565369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.468 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.565637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.565711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.565938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.566019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.566161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.566197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 
00:36:12.756 [2024-12-07 01:03:28.566336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.566381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.566500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.566535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.566723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.756 [2024-12-07 01:03:28.566756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.756 qpair failed and we were unable to recover it. 00:36:12.756 [2024-12-07 01:03:28.566893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.566926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.567062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.567095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.567255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.567300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.567442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.567477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.567592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.567625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.567768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.567825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.568043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.568079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 
00:36:12.757 [2024-12-07 01:03:28.568213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.568246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.568446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.568502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.568763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.568813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.569026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.569060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.569173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.569206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.569350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.569399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.569534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.569567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.569780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.569839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.570103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.570139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.570313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.570348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 
00:36:12.757 [2024-12-07 01:03:28.570614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.570670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.570900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.570957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.571200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.571234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.571470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.571547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.571810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.571867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.572061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.572195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.572502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.572663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 
00:36:12.757 [2024-12-07 01:03:28.572822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.572854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.573058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.573094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.573224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.573259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.573435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.573486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.573700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.573756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.573942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.574029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.574280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.574336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.574591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.574626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.574725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.574774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.757 qpair failed and we were unable to recover it. 00:36:12.757 [2024-12-07 01:03:28.574884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.757 [2024-12-07 01:03:28.574917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 
00:36:12.758 [2024-12-07 01:03:28.575110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.575151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.575305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.575338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.575475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.575510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.575642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.575678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.575949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.576129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.576294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.576432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.576576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.576793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.576846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 
00:36:12.758 [2024-12-07 01:03:28.577053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.577106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.577320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.577371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.577583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.577615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.577749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.577782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.577969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.578043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.578252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.578305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.578475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.578529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.578740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.578774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.578916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.578949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.579113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.579163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 
00:36:12.758 [2024-12-07 01:03:28.579413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.579445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.579607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.579639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.579797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.579849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.580091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.580126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.580268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.580333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.580474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.580507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.580638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.580674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.580848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.580901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.581137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.581193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.581424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.581456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 
00:36:12.758 [2024-12-07 01:03:28.581581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.581614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.581817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.581850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.581992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.582052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.582261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.582297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.582395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.582428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.582546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.582581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.758 [2024-12-07 01:03:28.582755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.758 [2024-12-07 01:03:28.582809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.758 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.583066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.583103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.583296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.583329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.583439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.583473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 
00:36:12.759 [2024-12-07 01:03:28.583653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.583694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.583834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.583868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.584146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.584181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.584350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.584383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.584591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.584643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.584864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.584899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.585043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.585078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.585367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.585447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.585650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.585684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.585789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.585822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 
00:36:12.759 [2024-12-07 01:03:28.585986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.586048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.586318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.586371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.586623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.586658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.586799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.586834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.587040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.587074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.587210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.587244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.587480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.587535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.587785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.587818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.587946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.587979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 00:36:12.759 [2024-12-07 01:03:28.588260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.588335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it. 
00:36:12.759 [2024-12-07 01:03:28.588555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.759 [2024-12-07 01:03:28.588589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.759 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 01:03:28.588555 through 01:03:28.632339; repeated instances condensed here for readability ...]
00:36:12.765 [2024-12-07 01:03:28.632306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.632339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it.
00:36:12.765 [2024-12-07 01:03:28.632471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.632504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.632672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.632727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.632870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.632904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.633939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.633985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.634189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.634223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 
00:36:12.765 [2024-12-07 01:03:28.634351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.634391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.634524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.634558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.634709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.634755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.634909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.634958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.635159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.635206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.635361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.635393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.635558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.635590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.635733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.635766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.635964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.636002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.636114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.636148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 
00:36:12.765 [2024-12-07 01:03:28.636282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.636315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.636418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.636452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.765 qpair failed and we were unable to recover it. 00:36:12.765 [2024-12-07 01:03:28.636584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.765 [2024-12-07 01:03:28.636617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.636712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.636745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.636859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.636892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.636983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.637180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.637312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.637482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.637608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 
00:36:12.766 [2024-12-07 01:03:28.637755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.637890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.637922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.638033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.638067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.638191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.638237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.638453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.638499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.638678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.638733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.638843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.638877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.639078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.639113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.639262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.639296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.639467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.639512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 
00:36:12.766 [2024-12-07 01:03:28.639691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.639738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.639892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.639945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.640056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.640091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.640302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.640347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.640489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.640535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.640681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.640725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.640857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.640894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.641073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.641120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.641294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.641338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.641554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.641599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 
00:36:12.766 [2024-12-07 01:03:28.641811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.641863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.642040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.642087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.642272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.642344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.642512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.642556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.642690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.766 [2024-12-07 01:03:28.642737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.766 qpair failed and we were unable to recover it. 00:36:12.766 [2024-12-07 01:03:28.642919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.642967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.643134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.643180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.643359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.643410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.643580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.643613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.643815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.643848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 
00:36:12.767 [2024-12-07 01:03:28.643968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.644008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.644123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.644157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.644325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.644369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.644544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.644589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.644783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.644837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.645018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.645054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.645204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.645252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.645447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.645496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.645646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.645692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.645846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.645892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 
00:36:12.767 [2024-12-07 01:03:28.646085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.646132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.646260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.646307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.646453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.646501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.646683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.646728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.646879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.646924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.647082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.647128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.647318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.647354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.647494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.647528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.647687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.647731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.647900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.647945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 
00:36:12.767 [2024-12-07 01:03:28.648133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.648179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.648375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.648422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.648558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.648606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.648790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.648836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.649864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.649899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 
00:36:12.767 [2024-12-07 01:03:28.650011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.650077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.650242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.650312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.650474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.767 [2024-12-07 01:03:28.650541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.767 qpair failed and we were unable to recover it. 00:36:12.767 [2024-12-07 01:03:28.650763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.650830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.650972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.651031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.651212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.651282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.651512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.651578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.651805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.651873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.652135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.652171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.652277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.652312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 
00:36:12.768 [2024-12-07 01:03:28.652530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.652597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.652749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.652805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.652913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.652946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.653140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.653208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.653454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.653519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.653741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.653788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.653962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.654028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.654260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.654294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.654436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.654471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.654696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.654768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 
00:36:12.768 [2024-12-07 01:03:28.654911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.654959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.655208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.655247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.655356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.655390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.655566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.655600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.655740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.655774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.655893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.655926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.656046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.656080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.656226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.656267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.656413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.656447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.656624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.656689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 
00:36:12.768 [2024-12-07 01:03:28.656929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.656978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.657186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.657232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.657421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.657498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.657702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.657774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.657961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.658017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.658197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.658269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.658458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.658527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.658720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.658767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.658964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.659020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 00:36:12.768 [2024-12-07 01:03:28.659248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.768 [2024-12-07 01:03:28.659295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.768 qpair failed and we were unable to recover it. 
00:36:12.768 [2024-12-07 01:03:28.659565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.768 [2024-12-07 01:03:28.659642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:12.768 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 from posix.c:1054, sock connection error from nvme_tcp.c:2288 to addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously: for tqpair=0x7f2388000b90 through 01:03:28.668, then for tqpair=0x7f2394000b90 through 01:03:28.725 ...]
00:36:12.774 [2024-12-07 01:03:28.725078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.774 [2024-12-07 01:03:28.725143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:12.774 qpair failed and we were unable to recover it.
00:36:12.774 [2024-12-07 01:03:28.725402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.725466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.725709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.725775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.726041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.726107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.726325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.726390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.726684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.726748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.727018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.727085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.727296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.727361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.727559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.727623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.727907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.727971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.728286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.728352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 
00:36:12.774 [2024-12-07 01:03:28.728569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.728633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.728840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.774 [2024-12-07 01:03:28.728904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.774 qpair failed and we were unable to recover it. 00:36:12.774 [2024-12-07 01:03:28.729218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.729296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.729556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.729620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.729919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.730034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.730338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.730402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.730651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.730716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.730984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.731091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.731362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.731426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.731715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.731779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 
00:36:12.775 [2024-12-07 01:03:28.732066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.732133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.732396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.732460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.732665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.732732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.732990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.733073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.733376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.733452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.733742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.733807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.734070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.734137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.734427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.734491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.734739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.734804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.735046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.735113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 
00:36:12.775 [2024-12-07 01:03:28.735359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.735423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.735684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.735757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.736017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.736083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.736382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.736456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.736694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.736759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.737016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.737084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.737306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.737370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.737612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.737676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.737962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.738064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.738312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.738376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 
00:36:12.775 [2024-12-07 01:03:28.738611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.738679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.738974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.739054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.739306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.739370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.739613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.775 [2024-12-07 01:03:28.739677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.775 qpair failed and we were unable to recover it. 00:36:12.775 [2024-12-07 01:03:28.739982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.740075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.740376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.740441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.740683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.740750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.741052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.741129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.741422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.741487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.741732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.741799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 
00:36:12.776 [2024-12-07 01:03:28.742050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.742118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.742428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.742502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.742738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.742803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.743098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.743175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.743466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.743530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.743769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.743834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.744121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.744186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.744439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.744515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.744777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.744843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.745091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.745159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 
00:36:12.776 [2024-12-07 01:03:28.745432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.745497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.745687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.745751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.746047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.746114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.746403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.746468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.746679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.746742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.747044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.747120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.747426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.747491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.747782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.747845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.748086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.748152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.748441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.748506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 
00:36:12.776 [2024-12-07 01:03:28.748792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.748856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.749154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.749221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.749480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.749544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.749829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.749892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.750323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.750392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.750661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.750725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.751028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.751101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.751331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.751397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.751692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.751757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.752060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.752124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 
00:36:12.776 [2024-12-07 01:03:28.752378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.752441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.776 [2024-12-07 01:03:28.752684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.776 [2024-12-07 01:03:28.752748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.776 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.753048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.753114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.753403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.753467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.753675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.753741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.753988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.754084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.754335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.754402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.754665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.754730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.754984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.755064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.755314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.755378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 
00:36:12.777 [2024-12-07 01:03:28.755641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.755704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.755900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.755963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.756273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.756337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.756586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.756650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.756912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.756977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.757290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.757355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.757564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.757628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.757866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.757941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.758215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.758281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.758522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.758588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 
00:36:12.777 [2024-12-07 01:03:28.758896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.758967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.759273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.759337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.759620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.759684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.759970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.760056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.760307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.760373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.760668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.760733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.760929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.761013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.761274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.761339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.761641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.761716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.761978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.762059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 
00:36:12.777 [2024-12-07 01:03:28.762300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.762366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.762591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.762658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.762918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.762983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.763261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.763326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.763608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.763673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.763968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.764049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.764300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.764367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.764617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.764682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.764972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.765060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 00:36:12.777 [2024-12-07 01:03:28.765317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.777 [2024-12-07 01:03:28.765382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.777 qpair failed and we were unable to recover it. 
00:36:12.778 [2024-12-07 01:03:28.765673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.765737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.766059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.766316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.766350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.766525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.766559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.766786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.766852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.767120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.767154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.767263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.767316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.767495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.767530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.767644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.767681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.767825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.767861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 
00:36:12.778 [2024-12-07 01:03:28.768007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.768058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.768164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.768197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.768316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.768351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.768534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.768602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.768899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.768975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.769206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.769240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.769376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.769412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.769569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.769610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.769837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.769903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.770105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 
00:36:12.778 [2024-12-07 01:03:28.770258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.770317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.770619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.770695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.770936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.771026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.771201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.771235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.771462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.771497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.771642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.771705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.772007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.772079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.772203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.772235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.772415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.772479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.772718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.772781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 
00:36:12.778 [2024-12-07 01:03:28.773074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.773108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.773250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.773316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.773584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.773649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.773893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.773959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.774124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.774160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.774308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.774342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.774565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.778 [2024-12-07 01:03:28.774600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.778 qpair failed and we were unable to recover it. 00:36:12.778 [2024-12-07 01:03:28.774752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.774786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.774940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.774975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.775185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.775219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 
00:36:12.779 [2024-12-07 01:03:28.775479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.775544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.775794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.775858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.776132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.776167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.776272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.776322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.776429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.776465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.776699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.776734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.776848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.776883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.777085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.777120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.777222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.777256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.777472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.777507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 
00:36:12.779 [2024-12-07 01:03:28.777681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.777735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.777948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.778186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.778362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.778547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.778702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.778846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.778881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.779043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.779084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.779199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.779232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.779387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.779423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 
00:36:12.779 [2024-12-07 01:03:28.779594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.779630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.779766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.779801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.779952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.780006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.780146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.780180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.780367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.780430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.780682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.780749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.780981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.781022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.781125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.781158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.781329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.781363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.779 [2024-12-07 01:03:28.781465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.781499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 
00:36:12.779 [2024-12-07 01:03:28.781639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.779 [2024-12-07 01:03:28.781673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.779 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.781815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.781849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.782136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.782286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.782452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.782596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.782827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.782891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.783103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.783277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 
00:36:12.780 [2024-12-07 01:03:28.783458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.783630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.783778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.783921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.783954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.784132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.784186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.784328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.784380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.784558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.784595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.784704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.784739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.784909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.784943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.785074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 
00:36:12.780 [2024-12-07 01:03:28.785251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.785442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.785584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.785737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.785925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.785960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.786122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.786157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.786261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.786306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.786481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.786521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.786630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.786666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.786923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.786988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 
00:36:12.780 [2024-12-07 01:03:28.787213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.787247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.787417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.787451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.787581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.787615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.787736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.787772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.787912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.787948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.788103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.788138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.788243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.788277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.788439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.780 [2024-12-07 01:03:28.788475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.780 qpair failed and we were unable to recover it. 00:36:12.780 [2024-12-07 01:03:28.788635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.788670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.788843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.788880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 
00:36:12.781 [2024-12-07 01:03:28.788990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.789151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.789300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.789443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.789643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.789821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.789856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.790023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.790064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.790175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.790209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.790328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.790363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.790505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.790540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 
00:36:12.781 [2024-12-07 01:03:28.790782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.790847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.791941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.791976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.792130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.792316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.792450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 
00:36:12.781 [2024-12-07 01:03:28.792625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.792814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.792959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.792992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.793903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.793937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.794072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.794109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 
00:36:12.781 [2024-12-07 01:03:28.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.794261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.794434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.794468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.794618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.794654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.794806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.794840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.794977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.795019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.795128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.781 [2024-12-07 01:03:28.795162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.781 qpair failed and we were unable to recover it. 00:36:12.781 [2024-12-07 01:03:28.795310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.795345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.795486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.795520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.795635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.795671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.795813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.795848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 
00:36:12.782 [2024-12-07 01:03:28.796023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.796167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.796352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.796558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.796702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.796870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.796903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 
00:36:12.782 [2024-12-07 01:03:28.797659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.797945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.797979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.798931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.798966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 
00:36:12.782 [2024-12-07 01:03:28.799236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.799852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.799989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.800183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.800356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.800573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.800746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 
00:36:12.782 [2024-12-07 01:03:28.800888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.800922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.801072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.801108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.801211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.801246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.801364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.782 [2024-12-07 01:03:28.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.782 qpair failed and we were unable to recover it. 00:36:12.782 [2024-12-07 01:03:28.801562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.801597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.801734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.801768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.801876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.801912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.802059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.802192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.802332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 
00:36:12.783 [2024-12-07 01:03:28.802507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.802672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.802881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.802915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.803881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.803913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.804022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 
00:36:12.783 [2024-12-07 01:03:28.804184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.804339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.804503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.804679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.804848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.804881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.805012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.805205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.805357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.805533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.805684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 
00:36:12.783 [2024-12-07 01:03:28.805830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.805865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.806951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.806988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.807141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.807177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.807284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.807329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.807497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.807532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 
00:36:12.783 [2024-12-07 01:03:28.807675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.807709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.783 qpair failed and we were unable to recover it. 00:36:12.783 [2024-12-07 01:03:28.807816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.783 [2024-12-07 01:03:28.807851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.807980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.808925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.808960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.809098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.809134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 
00:36:12.784 [2024-12-07 01:03:28.809269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.809305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.809447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.809482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.809630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.809665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.809886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.809953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.810138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.810174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.810293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.810328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.810442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.810477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.810621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.810657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.810831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.810868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.811004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 
00:36:12.784 [2024-12-07 01:03:28.811162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.811303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.811479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.811652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.811827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.811861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.812026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.812214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.812370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.812517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.812694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 
00:36:12.784 [2024-12-07 01:03:28.812847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.812881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.813038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.813073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.813181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.813215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.813334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.784 [2024-12-07 01:03:28.813369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.784 qpair failed and we were unable to recover it. 00:36:12.784 [2024-12-07 01:03:28.813540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.813575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.813676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.813710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.813854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.813888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.814009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.814175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.814394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 
00:36:12.785 [2024-12-07 01:03:28.814560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.814706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.814910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.814944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.815957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.815992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.816123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.816160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 
00:36:12.785 [2024-12-07 01:03:28.816267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.816312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.816488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.816523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.816700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.816736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.816829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.816861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.816976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.817126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.817270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.817453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.817597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.817747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 
00:36:12.785 [2024-12-07 01:03:28.817936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.817970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.818091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.818127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.818229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.818264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.818439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.818473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.818790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.818854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.819075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.819117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.819243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.819295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.819478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.819515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.819682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.819715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.819991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.820046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 
00:36:12.785 [2024-12-07 01:03:28.820156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.820190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.820307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.785 [2024-12-07 01:03:28.820342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.785 qpair failed and we were unable to recover it. 00:36:12.785 [2024-12-07 01:03:28.820479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.820514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.820766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.820803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.821066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.821101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.821232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.821267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.821375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.821409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.821591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.821646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.821949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.822015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.822150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.822188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 
00:36:12.786 [2024-12-07 01:03:28.822310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.822342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.822599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.822634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.822866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.822926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.823115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.823151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.823276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.823311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.823480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.823517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.823690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.823762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.823987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.824029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.824169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.824203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.824385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.824419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 
00:36:12.786 [2024-12-07 01:03:28.824522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.824581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.824824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.824859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.825086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.825122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.825237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.825269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.825376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.825410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.825573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.825607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.825827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.825886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.826081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.826116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.826226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.826260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.826409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.826443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 
00:36:12.786 [2024-12-07 01:03:28.826581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.826614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.826823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.826873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.827903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.827935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.828079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.828128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 00:36:12.786 [2024-12-07 01:03:28.828241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.786 [2024-12-07 01:03:28.828279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.786 qpair failed and we were unable to recover it. 
00:36:12.787 [2024-12-07 01:03:28.828383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.828418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.828562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.828598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.828821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.828857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.829129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.829275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.829411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.829614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.829817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.829961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.830148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 
00:36:12.787 [2024-12-07 01:03:28.830323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.830497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.830638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.830816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.830852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.831921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.831956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 
00:36:12.787 [2024-12-07 01:03:28.832119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.832325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.832504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.832657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.832800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.832936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.832967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.833070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.833246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.833415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.833584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 
00:36:12.787 [2024-12-07 01:03:28.833731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.833924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.833976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.834153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.834190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.834333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.834368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.834561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.834625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.834819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.834882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.835127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.835163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.835309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.787 [2024-12-07 01:03:28.835344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.787 qpair failed and we were unable to recover it. 00:36:12.787 [2024-12-07 01:03:28.835461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.835495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.835719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.835781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 
00:36:12.788 [2024-12-07 01:03:28.836017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.836185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.836358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.836557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.836734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.836880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.836914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.837024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.837184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.837360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.837499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 
00:36:12.788 [2024-12-07 01:03:28.837718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.837900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.837936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.838178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.838360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.838506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.838657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.838829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.838979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.839163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.839368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 
00:36:12.788 [2024-12-07 01:03:28.839554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.839711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.839858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.839891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.840084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.840126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.840264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.840298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.840446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.840481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.840689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.840724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.840877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.840910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.841083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.841116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.841264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.841300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 
00:36:12.788 [2024-12-07 01:03:28.841472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.841506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.841644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.841678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.841844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.841879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.842016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.842068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.842216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.842252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.842481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.788 [2024-12-07 01:03:28.842516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.788 qpair failed and we were unable to recover it. 00:36:12.788 [2024-12-07 01:03:28.842660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.842720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.842923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.842958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.843142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.843178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.843315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.843349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 
00:36:12.789 [2024-12-07 01:03:28.843539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.843574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.843703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.843737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.843873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.843907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.844069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.844103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.844212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.844247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.844485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.844545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.844750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.844812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.845050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.845086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.845228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.845263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 
00:36:12.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 414807 Killed "${NVMF_APP[@]}" "$@"
00:36:12.789 [2024-12-07 01:03:28.845368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.845410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 [2024-12-07 01:03:28.845565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.845601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:12.789 [2024-12-07 01:03:28.845770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.845813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 [2024-12-07 01:03:28.845949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:12.789 [2024-12-07 01:03:28.845984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 [2024-12-07 01:03:28.846115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.846147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:12.789 [2024-12-07 01:03:28.846275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.846306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
00:36:12.789 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:12.789 [2024-12-07 01:03:28.846432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.789 [2024-12-07 01:03:28.846464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.789 qpair failed and we were unable to recover it.
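For reference while reading the entries above: errno 111 on Linux is ECONNREFUSED. The test has just killed the running nvmf target (the Killed "${NVMF_APP[@]}" line), so every connect() the initiator issues toward 10.0.0.2:4420 is refused until disconnect_init brings a new target up. A minimal stand-alone C sketch that reproduces the same errno when nothing is listening; this is illustrative only and not SPDK code, with the address and port taken from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP socket. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}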
00:36:12.789 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.789 [2024-12-07 01:03:28.846661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.846696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.846836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.846870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.847904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.847936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.848099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.848131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.848227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.848258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 
00:36:12.789 [2024-12-07 01:03:28.848395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.789 [2024-12-07 01:03:28.848428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.789 qpair failed and we were unable to recover it. 00:36:12.789 [2024-12-07 01:03:28.848580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.848614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.848726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.848776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.848945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.848980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.849122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.849155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.849330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.849363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.849521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.849556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.849684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.849717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.849870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.849909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.850043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.850078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 
00:36:12.790 [2024-12-07 01:03:28.850185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.850216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 [2024-12-07 01:03:28.850361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.850400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 [2024-12-07 01:03:28.850545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.850589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 [2024-12-07 01:03:28.850756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=415300
00:36:12.790 [2024-12-07 01:03:28.850792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:12.790 [2024-12-07 01:03:28.850947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 415300
00:36:12.790 [2024-12-07 01:03:28.850980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 [2024-12-07 01:03:28.851108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.851141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 415300 ']'
00:36:12.790 [2024-12-07 01:03:28.851292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.851327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:12.790 [2024-12-07 01:03:28.851502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.790 [2024-12-07 01:03:28.851541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:12.790 qpair failed and we were unable to recover it.
00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.790 [2024-12-07 01:03:28.851650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.851683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.790 [2024-12-07 01:03:28.851839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.851873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 01:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.790 [2024-12-07 01:03:28.852071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.852237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.852382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.852545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.852679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.852793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.852827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 
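The shell trace interleaved above shows the recovery half of the test case: nvmfappstart launches a fresh nvmf_tgt (nvmfpid=415300) inside the cvl_0_0_ns_spdk namespace and waitforlisten polls for it with max_retries=100, while the initiator keeps retrying the same 10.0.0.2:4420 connection in the meantime. Conceptually that waiting is a bounded retry loop; the hypothetical C sketch below illustrates the idea only and is not SPDK's nvme_tcp reconnect path (the retry bound and delay are made-up values here, the address and port come from the log):

#include <arpa/inet.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One blocking connect attempt; returns true once a listener answers. */
static bool try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    if (!ok)
        fprintf(stderr, "attempt failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return ok;
}

int main(void)
{
    const int max_retries = 100;                 /* bound borrowed from the trace; arbitrary here */

    for (int i = 0; i < max_retries; i++) {
        if (try_connect("10.0.0.2", 4420)) {     /* target address/port from the log */
            printf("listener is back after %d attempt(s)\n", i + 1);
            return 0;
        }
        usleep(100 * 1000);                      /* 100 ms pause between attempts (made-up value) */
    }

    fprintf(stderr, "gave up after %d attempts\n", max_retries);
    return 1;
}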
00:36:12.790 [2024-12-07 01:03:28.853011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.853192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.853375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.853578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.853727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.853879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.853907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.854029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.854059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.854168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.854192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.790 [2024-12-07 01:03:28.854287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.790 [2024-12-07 01:03:28.854312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.790 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.854412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.854437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 
00:36:12.791 [2024-12-07 01:03:28.854528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.854554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.854644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.854670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.854786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.854811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.854951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.854988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 
00:36:12.791 [2024-12-07 01:03:28.855819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.855932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.855957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.856893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.856918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 
00:36:12.791 [2024-12-07 01:03:28.857013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.857869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.857906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 
00:36:12.791 [2024-12-07 01:03:28.858280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.858926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.858953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.859081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.791 [2024-12-07 01:03:28.859107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.791 qpair failed and we were unable to recover it. 00:36:12.791 [2024-12-07 01:03:28.859191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.859308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.859421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 
00:36:12.792 [2024-12-07 01:03:28.859566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.859670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.859789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.859906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.859931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 
00:36:12.792 [2024-12-07 01:03:28.860735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.860907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.860932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.861845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 
00:36:12.792 [2024-12-07 01:03:28.861950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.861976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.862899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.862985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.863105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 
00:36:12.792 [2024-12-07 01:03:28.863216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.863328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.863435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.863559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.792 [2024-12-07 01:03:28.863689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.792 [2024-12-07 01:03:28.863721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.792 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.863860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.863898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.863993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 
00:36:12.793 [2024-12-07 01:03:28.864450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.864947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.864982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 
00:36:12.793 [2024-12-07 01:03:28.865690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.865927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.865955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 
00:36:12.793 [2024-12-07 01:03:28.866816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.866952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.866977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.793 [2024-12-07 01:03:28.867658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.793 [2024-12-07 01:03:28.867687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.793 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.867774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.867803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.867917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.867944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 
00:36:12.794 [2024-12-07 01:03:28.868038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.868957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.868985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 
00:36:12.794 [2024-12-07 01:03:28.869298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.869936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.869983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 
00:36:12.794 [2024-12-07 01:03:28.870631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.870898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.870987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 
00:36:12.794 [2024-12-07 01:03:28.871802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.871910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.871937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.872040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.872065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.872158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.872183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.872273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.872297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.794 qpair failed and we were unable to recover it. 00:36:12.794 [2024-12-07 01:03:28.872386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.794 [2024-12-07 01:03:28.872412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.872498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.872522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.872613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.872638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.872742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.872767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.872893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.872931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 
00:36:12.795 [2024-12-07 01:03:28.873025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.873964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.873990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 
00:36:12.795 [2024-12-07 01:03:28.874311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.874948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.874985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 
00:36:12.795 [2024-12-07 01:03:28.875640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.875899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.875925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.876021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.876048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.876131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.876156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.876240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:12.795 [2024-12-07 01:03:28.876351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.795 [2024-12-07 01:03:28.876377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:12.795 qpair failed and we were unable to recover it. 00:36:13.078 [2024-12-07 01:03:28.876493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.078 [2024-12-07 01:03:28.876519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.078 qpair failed and we were unable to recover it. 00:36:13.078 [2024-12-07 01:03:28.876608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.078 [2024-12-07 01:03:28.876638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.876730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.876756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 
00:36:13.079 [2024-12-07 01:03:28.876844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.876871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.876959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.876983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.877884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.877909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 
00:36:13.079 [2024-12-07 01:03:28.877983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.878896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.878980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 
00:36:13.079 [2024-12-07 01:03:28.879122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.879931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.879957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.880060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.880188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 
00:36:13.079 [2024-12-07 01:03:28.880294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.880436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.880548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.079 [2024-12-07 01:03:28.880653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.079 [2024-12-07 01:03:28.880678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.079 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.880759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.880785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.880863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.880889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.880970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 
00:36:13.080 [2024-12-07 01:03:28.881484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.881887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.881987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 
00:36:13.080 [2024-12-07 01:03:28.882746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.882963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.882988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.883830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 
00:36:13.080 [2024-12-07 01:03:28.883936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.883961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.884877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.884905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.885006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.885033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 
00:36:13.080 [2024-12-07 01:03:28.885123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.080 [2024-12-07 01:03:28.885150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.080 qpair failed and we were unable to recover it. 00:36:13.080 [2024-12-07 01:03:28.885228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.885892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.885931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 
00:36:13.081 [2024-12-07 01:03:28.886287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.886882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.886907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 
00:36:13.081 [2024-12-07 01:03:28.887487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.887935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.887960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 
00:36:13.081 [2024-12-07 01:03:28.888641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.888881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.888984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.889018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.889100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.889126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.889206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.889231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.889320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.081 [2024-12-07 01:03:28.889346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.081 qpair failed and we were unable to recover it. 00:36:13.081 [2024-12-07 01:03:28.889426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.889451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.889534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.889560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.889648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.889674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 
00:36:13.082 [2024-12-07 01:03:28.889760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.889784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.889881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.889926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.890895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.890934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 
00:36:13.082 [2024-12-07 01:03:28.891057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.891909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.891933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 
00:36:13.082 [2024-12-07 01:03:28.892274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.892892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.892991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 
00:36:13.082 [2024-12-07 01:03:28.893476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.082 [2024-12-07 01:03:28.893863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.082 qpair failed and we were unable to recover it. 00:36:13.082 [2024-12-07 01:03:28.893947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.893971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 
00:36:13.083 [2024-12-07 01:03:28.894714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.894943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.894968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.895783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 
00:36:13.083 [2024-12-07 01:03:28.895895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.895920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.896963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.896989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 
00:36:13.083 [2024-12-07 01:03:28.897086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.897113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.897197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.897226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.897306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.897333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.083 [2024-12-07 01:03:28.897442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.083 [2024-12-07 01:03:28.897471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.083 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.897562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.897589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.897711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.897742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.897828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.897854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.897936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.897961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 
00:36:13.084 [2024-12-07 01:03:28.898269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.898903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.898931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 
00:36:13.084 [2024-12-07 01:03:28.899542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.899942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.899969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.084 [2024-12-07 01:03:28.900632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.900656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 
00:36:13.084 [2024-12-07 01:03:28.900803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.900828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.900922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.900950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.900964] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:36:13.084 [2024-12-07 01:03:28.901034] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:13.084 [2024-12-07 01:03:28.901054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
00:36:13.084 [2024-12-07 01:03:28.901835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.084 [2024-12-07 01:03:28.901860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.084 qpair failed and we were unable to recover it.
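Editor's note: the two entries just above ("Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization..." and the "DPDK EAL parameters" list) show a new nvmf target process starting up in the middle of the reconnect-error stream. As a rough, hedged illustration only (this is not SPDK source; the argument list is copied from the log entry above and trimmed, and nothing here is taken from the test scripts), the sketch below shows how such EAL parameters would typically be handed to DPDK's rte_eal_init(). It needs a DPDK development environment to build.

/* Illustrative sketch only, not SPDK code: pass EAL arguments similar to the
 * logged "DPDK EAL parameters" line to rte_eal_init(). */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                              /* program name, as in the logged parameter list */
        "-c", "0xF0",                        /* core mask: EAL threads on cores 4-7 */
        "--base-virtaddr=0x200000000000",    /* fixed base virtual address for shared memory */
        "--file-prefix=spdk0",               /* namespace for hugepage/runtime files */
        "--proc-type=auto",                  /* detect primary vs. secondary process */
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    printf("EAL initialized\n");
    return 0;
}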
00:36:13.084 [2024-12-07 01:03:28.901944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.084 [2024-12-07 01:03:28.901969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.084 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.902902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.902942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 
00:36:13.085 [2024-12-07 01:03:28.903267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.903871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.903909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 
00:36:13.085 [2024-12-07 01:03:28.904502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.904966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.904991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 
00:36:13.085 [2024-12-07 01:03:28.905713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.905958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.905984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.906097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.906124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.906206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.906235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.906323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.906349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.906438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.906466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.085 [2024-12-07 01:03:28.906583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.085 [2024-12-07 01:03:28.906610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.085 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.906692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.906718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.906801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.906827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 
00:36:13.086 [2024-12-07 01:03:28.906967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.907910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.907937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 
00:36:13.086 [2024-12-07 01:03:28.908136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.908929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.908954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 
00:36:13.086 [2024-12-07 01:03:28.909304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.909919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.909944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 
00:36:13.086 [2024-12-07 01:03:28.910564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.910904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.910931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.086 qpair failed and we were unable to recover it. 00:36:13.086 [2024-12-07 01:03:28.911014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.086 [2024-12-07 01:03:28.911042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 
00:36:13.087 [2024-12-07 01:03:28.911739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.911959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.911986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.912874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.912901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 
00:36:13.087 [2024-12-07 01:03:28.912979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.913941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.913967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 
00:36:13.087 [2024-12-07 01:03:28.914218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.914950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.914976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 
00:36:13.087 [2024-12-07 01:03:28.915552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.087 qpair failed and we were unable to recover it. 00:36:13.087 [2024-12-07 01:03:28.915823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.087 [2024-12-07 01:03:28.915863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.915985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-12-07 01:03:28.916860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.916887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.916974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.917944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.917971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-12-07 01:03:28.918062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.918959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.918985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 
00:36:13.088 [2024-12-07 01:03:28.919344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.088 [2024-12-07 01:03:28.919796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.088 qpair failed and we were unable to recover it. 00:36:13.088 [2024-12-07 01:03:28.919915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.919944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-12-07 01:03:28.920650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.920876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.920904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.921804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-12-07 01:03:28.921915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.921942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.922879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.922904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 
00:36:13.089 [2024-12-07 01:03:28.923230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.089 [2024-12-07 01:03:28.923898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.089 [2024-12-07 01:03:28.923925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.089 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 
00:36:13.090 [2024-12-07 01:03:28.924574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.924917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.924944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.925735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 
00:36:13.090 [2024-12-07 01:03:28.925907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.925934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.926867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.926893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 
00:36:13.090 [2024-12-07 01:03:28.927250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.090 [2024-12-07 01:03:28.927947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.090 [2024-12-07 01:03:28.927979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.090 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 
00:36:13.091 [2024-12-07 01:03:28.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.928896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.928925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.929907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.929934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 
00:36:13.091 [2024-12-07 01:03:28.930045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.930911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.930938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 
00:36:13.091 [2024-12-07 01:03:28.931405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.931917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.931945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 
00:36:13.091 [2024-12-07 01:03:28.932787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.932931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.932960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.933063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.091 [2024-12-07 01:03:28.933090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.091 qpair failed and we were unable to recover it. 00:36:13.091 [2024-12-07 01:03:28.933198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.933311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.933455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.933597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.933769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.933886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.933914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 
00:36:13.092 [2024-12-07 01:03:28.934153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.934865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.934893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 
00:36:13.092 [2024-12-07 01:03:28.935524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.935914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.935940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 
00:36:13.092 [2024-12-07 01:03:28.936868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.936896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.936982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.937895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.937978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.938012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.938095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.938122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 
00:36:13.092 [2024-12-07 01:03:28.938238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.092 [2024-12-07 01:03:28.938264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.092 qpair failed and we were unable to recover it. 00:36:13.092 [2024-12-07 01:03:28.938350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.938377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.938488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.938606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.938633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.938732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.938772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.938895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.938928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 
00:36:13.093 [2024-12-07 01:03:28.939624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.939932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.939973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.940909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.940936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 
00:36:13.093 [2024-12-07 01:03:28.941020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.941916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.941956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.942095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.942241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 
00:36:13.093 [2024-12-07 01:03:28.942386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.942531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.942643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.093 [2024-12-07 01:03:28.942790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.093 [2024-12-07 01:03:28.942816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.093 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.942931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.942961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 
00:36:13.094 [2024-12-07 01:03:28.943788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.943919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.943946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.944947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.944974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 
00:36:13.094 [2024-12-07 01:03:28.945133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.945318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.945477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.945606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.945754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.945923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.945950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 
00:36:13.094 [2024-12-07 01:03:28.946553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.946937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.946964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 00:36:13.094 [2024-12-07 01:03:28.947757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.094 [2024-12-07 01:03:28.947783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.094 qpair failed and we were unable to recover it. 
00:36:13.100 [2024-12-07 01:03:28.973527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.973554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.973644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.973671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.973775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.973816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.973946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.973974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.974730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 
00:36:13.100 [2024-12-07 01:03:28.974852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.974881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.975963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.975990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 
00:36:13.100 [2024-12-07 01:03:28.976241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.976919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.976945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 
00:36:13.100 [2024-12-07 01:03:28.977609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.100 [2024-12-07 01:03:28.977853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.100 [2024-12-07 01:03:28.977894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.100 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.978748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 
00:36:13.101 [2024-12-07 01:03:28.978896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.978924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.979839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.979867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 
00:36:13.101 [2024-12-07 01:03:28.980254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.980941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.980968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 
00:36:13.101 [2024-12-07 01:03:28.981600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.981934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.981962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.101 [2024-12-07 01:03:28.982846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.982885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 
00:36:13.101 [2024-12-07 01:03:28.982975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.101 [2024-12-07 01:03:28.983012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.101 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.983927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.983953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.984101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.984128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 
00:36:13.102 [2024-12-07 01:03:28.984242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.984273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.984395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.984422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.984532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.984558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.984664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.984704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.984837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.984878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.985057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.985200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.985346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:13.102 [2024-12-07 01:03:28.985470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.985497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.102 [2024-12-07 01:03:28.985620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.102 qpair failed and we were unable to recover it.
00:36:13.102 [2024-12-07 01:03:28.985736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.985763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.985849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.985876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.985973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.986905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.986945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 
00:36:13.102 [2024-12-07 01:03:28.987078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.987107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.987199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.987226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.987337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.987364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.102 [2024-12-07 01:03:28.987476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.102 [2024-12-07 01:03:28.987503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.102 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.987632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.987672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.987792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.987819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.987920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.987960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 
00:36:13.103 [2024-12-07 01:03:28.988458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.988933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.988960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 
00:36:13.103 [2024-12-07 01:03:28.989864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.989893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.989988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.990832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.990959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 
00:36:13.103 [2024-12-07 01:03:28.991258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.991947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.991976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.992103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.992232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.992380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.992499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 
00:36:13.103 [2024-12-07 01:03:28.992644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.103 qpair failed and we were unable to recover it. 00:36:13.103 [2024-12-07 01:03:28.992801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.103 [2024-12-07 01:03:28.992842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.992944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.992973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.993830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 
00:36:13.104 [2024-12-07 01:03:28.993940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.993967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.994923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 
00:36:13.104 [2024-12-07 01:03:28.995332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.995896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.995982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 
00:36:13.104 [2024-12-07 01:03:28.996657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.996949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.996976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.104 [2024-12-07 01:03:28.997814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.104 qpair failed and we were unable to recover it. 00:36:13.104 [2024-12-07 01:03:28.997931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.997959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 
00:36:13.105 [2024-12-07 01:03:28.998056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.998884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.998992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 
00:36:13.105 [2024-12-07 01:03:28.999443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:28.999942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:28.999983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 
00:36:13.105 [2024-12-07 01:03:29.000822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.000947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.000973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.001892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.001920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 
00:36:13.105 [2024-12-07 01:03:29.002164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.002857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.002961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.105 [2024-12-07 01:03:29.003008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.105 qpair failed and we were unable to recover it. 00:36:13.105 [2024-12-07 01:03:29.003108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 
00:36:13.106 [2024-12-07 01:03:29.003483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.003860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.003986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.004132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.004307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.004459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.004609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.004754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 
00:36:13.106 [2024-12-07 01:03:29.004913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.004953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.005956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.005984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 
00:36:13.106 [2024-12-07 01:03:29.006206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.006898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.006926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 
00:36:13.106 [2024-12-07 01:03:29.007644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.007931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.007958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.008053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.008080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.106 [2024-12-07 01:03:29.008193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.106 [2024-12-07 01:03:29.008233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.106 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.008353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.008505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.008613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.008730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.008846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 
00:36:13.107 [2024-12-07 01:03:29.008965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.008993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.009924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.009957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 
00:36:13.107 [2024-12-07 01:03:29.010338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.010945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.010985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 
00:36:13.107 [2024-12-07 01:03:29.011798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.011827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.011979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.107 [2024-12-07 01:03:29.012837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.107 [2024-12-07 01:03:29.012866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.107 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.012975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 
00:36:13.108 [2024-12-07 01:03:29.013245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.013882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.013909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 
00:36:13.108 [2024-12-07 01:03:29.014500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.014902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.014986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 
00:36:13.108 [2024-12-07 01:03:29.015714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.015871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.015912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.016851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.016891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 
00:36:13.108 [2024-12-07 01:03:29.017133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.108 [2024-12-07 01:03:29.017885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.108 [2024-12-07 01:03:29.017926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.108 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 
00:36:13.109 [2024-12-07 01:03:29.018526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.018918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.018947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 
00:36:13.109 [2024-12-07 01:03:29.019821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.019848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.019984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.020948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 
00:36:13.109 [2024-12-07 01:03:29.021217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.021871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.021912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 
00:36:13.109 [2024-12-07 01:03:29.022526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.109 [2024-12-07 01:03:29.022952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.109 [2024-12-07 01:03:29.022980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.109 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 
00:36:13.110 [2024-12-07 01:03:29.023819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.023940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.023967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.024929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.024959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 
00:36:13.110 [2024-12-07 01:03:29.025196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.025852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.025880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 
00:36:13.110 [2024-12-07 01:03:29.026533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.026909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.026936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 
00:36:13.110 [2024-12-07 01:03:29.027773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.110 [2024-12-07 01:03:29.027898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.110 [2024-12-07 01:03:29.027927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.110 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.028950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.028977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 
00:36:13.111 [2024-12-07 01:03:29.029231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.029919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.029950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 
00:36:13.111 [2024-12-07 01:03:29.030625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.030955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.030983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.031897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.031927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 
00:36:13.111 [2024-12-07 01:03:29.032063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.032091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.032205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.032232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.032336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.032363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.032450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.032477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.032587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.111 [2024-12-07 01:03:29.032615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.111 qpair failed and we were unable to recover it. 00:36:13.111 [2024-12-07 01:03:29.032691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.032718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.032824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.032865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.032985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 
00:36:13.112 [2024-12-07 01:03:29.033369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.033902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.033989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 
00:36:13.112 [2024-12-07 01:03:29.034602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.034932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.034960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 
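Note on the repeated failure above: errno = 111 is ECONNREFUSED on Linux, i.e. each connect() attempt made by posix_sock_create toward 10.0.0.2 port 4420 (the NVMe/TCP port used in this run) is being actively refused, typically because nothing is listening on that address at that moment, so nvme_tcp_qpair_connect_sock cannot bring the qpair up. The minimal C sketch below reproduces the same errno with a plain BSD socket; it is an illustration of the failure mode only, not SPDK's posix_sock_create, and the address/port are simply the ones taken from the log.

/* connect_refused.c - illustration only: a TCP connect() to an address/port
 * with no listener fails with errno 111 (ECONNREFUSED), matching the errors
 * logged by posix_sock_create above. Not SPDK code; address/port from the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}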
00:36:13.112 [2024-12-07 01:03:29.035694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.112 [2024-12-07 01:03:29.035726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.112 [2024-12-07 01:03:29.035741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.112 [2024-12-07 01:03:29.035753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.112 [2024-12-07 01:03:29.035762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.112 [2024-12-07 01:03:29.035772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.035895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.035920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 
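The app_setup_trace notices above give the two ways to look at this run's tracepoints: run 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace' if this is the only SPDK application running) while the target is still up, or copy /dev/shm/nvmf_trace.0 aside for offline analysis. As a small sketch of the copy-aside option, and not part of SPDK's own tooling, the C program below duplicates that shared-memory file into the working directory; the source path comes straight from the log and the destination name is an arbitrary choice for this example.

/* save_trace.c - sketch only: copy /dev/shm/nvmf_trace.0 (path from the log)
 * to a local file so it can be examined offline, as the notices suggest.
 * The destination filename is arbitrary. */
#include <stdio.h>

int main(void)
{
    const char *src = "/dev/shm/nvmf_trace.0";
    const char *dst = "nvmf_trace.0.saved";

    FILE *in = fopen(src, "rb");
    if (!in) {
        perror(src);
        return 1;
    }
    FILE *out = fopen(dst, "wb");
    if (!out) {
        perror(dst);
        fclose(in);
        return 1;
    }

    char buf[1 << 16];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        fwrite(buf, 1, n, out);
    }

    fclose(in);
    fclose(out);
    printf("copied %s to %s\n", src, dst);
    return 0;
}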
00:36:13.112 [2024-12-07 01:03:29.036850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.036879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.036972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.037006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.037122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.112 [2024-12-07 01:03:29.037149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.112 qpair failed and we were unable to recover it. 00:36:13.112 [2024-12-07 01:03:29.037244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.037386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:13.113 [2024-12-07 01:03:29.037413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.037373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:13.113 [2024-12-07 01:03:29.037401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:13.113 [2024-12-07 01:03:29.037504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:13.113 [2024-12-07 01:03:29.037532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.037641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.037776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.037890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.037917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 
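The reactor_run notices interleaved above show the target's event framework bringing up one reactor (polling thread) per core on cores 4-7 while the host-side connect attempts keep failing. Purely as an illustration of that thread-per-core model, and not SPDK's reactor implementation, the sketch below pins one POSIX thread to each of those core numbers with pthread_setaffinity_np and runs a trivial placeholder loop in place of real pollers.

/* reactors_sketch.c - illustration of a one-thread-per-core polling model,
 * echoing the "Reactor started on core N" notices above; not SPDK code.
 * Core numbers 4-7 are taken from the log. Build with: cc -pthread reactors_sketch.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *reactor_main(void *arg)
{
    int core = *(int *)arg;

    /* Pin this thread to its core, as a reactor would be. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("Reactor started on core %d\n", core);
    for (int i = 0; i < 3; i++) {
        /* A real reactor would poll its registered pollers here. */
        usleep(1000);
    }
    return NULL;
}

int main(void)
{
    int cores[] = { 4, 5, 6, 7 };   /* the cores named in the log */
    pthread_t threads[4];

    for (int i = 0; i < 4; i++) {
        pthread_create(&threads[i], NULL, reactor_main, &cores[i]);
    }
    for (int i = 0; i < 4; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}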
00:36:13.113 [2024-12-07 01:03:29.038020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.038897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.038969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 
00:36:13.113 [2024-12-07 01:03:29.039203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.039902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.039929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 
00:36:13.113 [2024-12-07 01:03:29.040395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.040885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.040977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.041104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.041223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.041330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.113 [2024-12-07 01:03:29.041482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 
00:36:13.113 [2024-12-07 01:03:29.041601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.113 [2024-12-07 01:03:29.041627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.113 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.041702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.041728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.041834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.041861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.041938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.041964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 
00:36:13.114 [2024-12-07 01:03:29.042768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.042892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.042933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.043945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.043972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 
00:36:13.114 [2024-12-07 01:03:29.044101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.044959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.044986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 
00:36:13.114 [2024-12-07 01:03:29.045393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.045908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.046007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.046035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.046153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.046180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.046263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.046291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.114 [2024-12-07 01:03:29.046375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.114 [2024-12-07 01:03:29.046403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.114 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.046517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.046544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 
00:36:13.115 [2024-12-07 01:03:29.046646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.046674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.046783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.046810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.046906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.046945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.047886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.047914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 
00:36:13.115 [2024-12-07 01:03:29.048012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.048932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.048961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 
00:36:13.115 [2024-12-07 01:03:29.049298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.049934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.049960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 
00:36:13.115 [2024-12-07 01:03:29.050565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.050913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.050941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.051024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.051054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.051139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.051167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.115 qpair failed and we were unable to recover it. 00:36:13.115 [2024-12-07 01:03:29.051258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.115 [2024-12-07 01:03:29.051285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.051367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.051394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.051492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.051519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.051667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.051708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 
00:36:13.116 [2024-12-07 01:03:29.051812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.051841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.051933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.051962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.052891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.052918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 
00:36:13.116 [2024-12-07 01:03:29.053008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.053886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.053971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 
00:36:13.116 [2024-12-07 01:03:29.054264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.054894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.054923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.055016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.055044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.055174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.055201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.055282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.055308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 00:36:13.116 [2024-12-07 01:03:29.055419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.116 [2024-12-07 01:03:29.055446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.116 qpair failed and we were unable to recover it. 
00:36:13.117 [2024-12-07 01:03:29.055535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.055563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.055660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.055689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.055811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.055839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.055922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.055950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 
00:36:13.117 [2024-12-07 01:03:29.056798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.056906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.056934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.057934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.057975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 
00:36:13.117 [2024-12-07 01:03:29.058083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.058873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.058967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 
00:36:13.117 [2024-12-07 01:03:29.059378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.059944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.059984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.060088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.060127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.117 qpair failed and we were unable to recover it. 00:36:13.117 [2024-12-07 01:03:29.060220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.117 [2024-12-07 01:03:29.060248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.060336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.060459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 
00:36:13.118 [2024-12-07 01:03:29.060572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.060681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.060790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.060932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.060960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 
00:36:13.118 [2024-12-07 01:03:29.061805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.061914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.061942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.062914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.062941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 
00:36:13.118 [2024-12-07 01:03:29.063057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.063952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.063979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 
00:36:13.118 [2024-12-07 01:03:29.064298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.118 [2024-12-07 01:03:29.064811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.118 qpair failed and we were unable to recover it. 00:36:13.118 [2024-12-07 01:03:29.064900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.064927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 
00:36:13.119 [2024-12-07 01:03:29.065535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.065963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.065990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 
00:36:13.119 [2024-12-07 01:03:29.066830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.066946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.066973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.067897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.067993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 
00:36:13.119 [2024-12-07 01:03:29.068145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.068880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.068974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.069146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.069250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 
00:36:13.119 [2024-12-07 01:03:29.069358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.069498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.069610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.119 [2024-12-07 01:03:29.069638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.119 qpair failed and we were unable to recover it. 00:36:13.119 [2024-12-07 01:03:29.069720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.069748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.069867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.069907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 
00:36:13.120 [2024-12-07 01:03:29.070639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.070965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.070992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 
00:36:13.120 [2024-12-07 01:03:29.071819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.071963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.072839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.072879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 
00:36:13.120 [2024-12-07 01:03:29.073129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.073874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.073915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.074034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.074064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.120 qpair failed and we were unable to recover it. 00:36:13.120 [2024-12-07 01:03:29.074157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.120 [2024-12-07 01:03:29.074185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 
00:36:13.121 [2024-12-07 01:03:29.074411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.074883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.074972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 
00:36:13.121 [2024-12-07 01:03:29.075595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.075880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.075969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.076782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 
00:36:13.121 [2024-12-07 01:03:29.076930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.076959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.077965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.077993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 
00:36:13.121 [2024-12-07 01:03:29.078110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.078150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.078249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.078278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.078400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.078429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.078514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.078541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.121 qpair failed and we were unable to recover it. 00:36:13.121 [2024-12-07 01:03:29.078668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.121 [2024-12-07 01:03:29.078696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.078805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.078832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.078915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.078942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 
00:36:13.122 [2024-12-07 01:03:29.079361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.079918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.079945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 
00:36:13.122 [2024-12-07 01:03:29.080516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.080903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.080984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 
00:36:13.122 [2024-12-07 01:03:29.081744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.081895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.081979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.082871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.082898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 
00:36:13.122 [2024-12-07 01:03:29.082978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.083019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.083104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.083132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.083219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.083246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.083360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.122 [2024-12-07 01:03:29.083388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.122 qpair failed and we were unable to recover it. 00:36:13.122 [2024-12-07 01:03:29.083497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.083525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.083638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.083666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.083782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.083809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.083889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.083915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 
00:36:13.123 [2024-12-07 01:03:29.084254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.084873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.084901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 
00:36:13.123 [2024-12-07 01:03:29.085468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.085909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.085950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 
00:36:13.123 [2024-12-07 01:03:29.086779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.086897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.086925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.087946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.087973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 
00:36:13.123 [2024-12-07 01:03:29.088063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.088092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.088204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.088231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.088320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.088347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.088454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.088481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.088562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.123 [2024-12-07 01:03:29.088598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.123 qpair failed and we were unable to recover it. 00:36:13.123 [2024-12-07 01:03:29.088687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.088715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.088800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.088829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.088919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.088946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 
00:36:13.124 [2024-12-07 01:03:29.089294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.089902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.089982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 
00:36:13.124 [2024-12-07 01:03:29.090490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.090953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.090980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 
00:36:13.124 [2024-12-07 01:03:29.091672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.091893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.091920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 
00:36:13.124 [2024-12-07 01:03:29.092819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.092944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.092970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.093061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.093091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.093177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.093206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.093290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.093318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.093402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.124 [2024-12-07 01:03:29.093430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.124 qpair failed and we were unable to recover it. 00:36:13.124 [2024-12-07 01:03:29.093515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.093543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.093637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.093664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.093769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.093797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.093875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.093904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 
00:36:13.125 [2024-12-07 01:03:29.094019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.094897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.094925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 
00:36:13.125 [2024-12-07 01:03:29.095218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.095897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.095981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 
00:36:13.125 [2024-12-07 01:03:29.096496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.096845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.096872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.125 qpair failed and we were unable to recover it. 00:36:13.125 [2024-12-07 01:03:29.097617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.125 [2024-12-07 01:03:29.097645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 
00:36:13.126 [2024-12-07 01:03:29.097757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.097784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.097865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.097893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.097973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.098844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 
00:36:13.126 [2024-12-07 01:03:29.098952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.098979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.099927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.099963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 
00:36:13.126 [2024-12-07 01:03:29.100168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.100932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.100960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 
00:36:13.126 [2024-12-07 01:03:29.101428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.126 [2024-12-07 01:03:29.101745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.126 qpair failed and we were unable to recover it. 00:36:13.126 [2024-12-07 01:03:29.101861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.101888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.101971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 
00:36:13.127 [2024-12-07 01:03:29.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.102923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.102951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.103840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 
00:36:13.127 [2024-12-07 01:03:29.103952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.103979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.104869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.104983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 
00:36:13.127 [2024-12-07 01:03:29.105214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.105912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.105940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 
00:36:13.127 [2024-12-07 01:03:29.106391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.106900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.106975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.107009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.107102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.107129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.127 qpair failed and we were unable to recover it. 00:36:13.127 [2024-12-07 01:03:29.107214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.127 [2024-12-07 01:03:29.107242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.107331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.107450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 
00:36:13.128 [2024-12-07 01:03:29.107572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.107684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.107806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.107955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.107992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 
00:36:13.128 [2024-12-07 01:03:29.108820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.108957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.108986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.109876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.109903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 
00:36:13.128 [2024-12-07 01:03:29.110114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.110926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.110954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 
00:36:13.128 [2024-12-07 01:03:29.111286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.111887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.111914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.112006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.112033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.112123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.112150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.112240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.112266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.112350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.112377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 
00:36:13.128 [2024-12-07 01:03:29.112462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.128 [2024-12-07 01:03:29.112489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.128 qpair failed and we were unable to recover it. 00:36:13.128 [2024-12-07 01:03:29.112573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.112600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.112683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.112709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.112789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.112816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.112892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.112919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 
00:36:13.129 [2024-12-07 01:03:29.113612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.113945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.113971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 
00:36:13.129 [2024-12-07 01:03:29.114786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.114910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.114937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.115913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.115941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 
00:36:13.129 [2024-12-07 01:03:29.116028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.116887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.116913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.129 [2024-12-07 01:03:29.117011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.129 [2024-12-07 01:03:29.117039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.129 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 
00:36:13.130 [2024-12-07 01:03:29.117263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.117958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 
00:36:13.130 [2024-12-07 01:03:29.118553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.118965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.118991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 
00:36:13.130 [2024-12-07 01:03:29.119857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.119885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.119975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.120933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.120959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 
00:36:13.130 [2024-12-07 01:03:29.121055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.121856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.121969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.122090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.122195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 
00:36:13.130 [2024-12-07 01:03:29.122334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.122471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.130 [2024-12-07 01:03:29.122584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.130 [2024-12-07 01:03:29.122612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.130 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.122700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.122727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.122819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.122846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.122934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.122961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 
00:36:13.131 [2024-12-07 01:03:29.123583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.123955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.123982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 
00:36:13.131 [2024-12-07 01:03:29.124857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.124886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.124970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.125885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.125912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 
00:36:13.131 [2024-12-07 01:03:29.126003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.126956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.126984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 
00:36:13.131 [2024-12-07 01:03:29.127254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.127957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.131 [2024-12-07 01:03:29.127985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.131 qpair failed and we were unable to recover it. 00:36:13.131 [2024-12-07 01:03:29.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 
00:36:13.132 [2024-12-07 01:03:29.128406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.128922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.128948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 
00:36:13.132 [2024-12-07 01:03:29.129640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.129904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.129932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 
00:36:13.132 [2024-12-07 01:03:29.130846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.130952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.130979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.131886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.131913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 
00:36:13.132 [2024-12-07 01:03:29.132017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.132964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.132990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.133087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.133115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 
00:36:13.132 [2024-12-07 01:03:29.133196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.132 [2024-12-07 01:03:29.133223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.132 qpair failed and we were unable to recover it. 00:36:13.132 [2024-12-07 01:03:29.133329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.133446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.133553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.133660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.133771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.133915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.133942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 
00:36:13.133 [2024-12-07 01:03:29.134392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.134946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.134972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 
00:36:13.133 [2024-12-07 01:03:29.135496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.135900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.135991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 
00:36:13.133 [2024-12-07 01:03:29.136680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.136899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.136927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 00:36:13.133 [2024-12-07 01:03:29.137718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.133 [2024-12-07 01:03:29.137745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.133 qpair failed and we were unable to recover it. 
00:36:13.133 [2024-12-07 01:03:29.137827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.137853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.137943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.137972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.138907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.138939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 
00:36:13.134 [2024-12-07 01:03:29.139056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.139958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.139985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 
00:36:13.134 [2024-12-07 01:03:29.140193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.140888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.140979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 
00:36:13.134 [2024-12-07 01:03:29.141345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.141906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.141933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 
00:36:13.134 [2024-12-07 01:03:29.142446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.134 [2024-12-07 01:03:29.142837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.134 qpair failed and we were unable to recover it. 00:36:13.134 [2024-12-07 01:03:29.142927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.142959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 
00:36:13.135 [2024-12-07 01:03:29.143654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.143897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.143978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 
00:36:13.135 [2024-12-07 01:03:29.144744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.144885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.144966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.145827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.145869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 
00:36:13.135 [2024-12-07 01:03:29.145965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.146933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.146960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 
00:36:13.135 [2024-12-07 01:03:29.147153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.135 [2024-12-07 01:03:29.147823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.135 qpair failed and we were unable to recover it. 00:36:13.135 [2024-12-07 01:03:29.147902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.147929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 
00:36:13.136 [2024-12-07 01:03:29.148249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.148942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.148969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 
00:36:13.136 [2024-12-07 01:03:29.149436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.149917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.149944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 
00:36:13.136 [2024-12-07 01:03:29.150653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.150884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.150913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 
00:36:13.136 [2024-12-07 01:03:29.151877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.151904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.151982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.152853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.152969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.153002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 
00:36:13.136 [2024-12-07 01:03:29.153100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.136 [2024-12-07 01:03:29.153127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.136 qpair failed and we were unable to recover it. 00:36:13.136 [2024-12-07 01:03:29.153211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.153920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.153947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 
00:36:13.137 [2024-12-07 01:03:29.154303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.154941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.154969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 
00:36:13.137 [2024-12-07 01:03:29.155513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.155871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.155911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 
00:36:13.137 [2024-12-07 01:03:29.156729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.156964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.156991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 00:36:13.137 [2024-12-07 01:03:29.157770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.137 [2024-12-07 01:03:29.157796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.137 qpair failed and we were unable to recover it. 
00:36:13.137 [2024-12-07 01:03:29.157882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.157907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.158851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.158878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 
00:36:13.138 [2024-12-07 01:03:29.158966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.159944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.159974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 
00:36:13.138 [2024-12-07 01:03:29.160186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.160894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.160981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 
00:36:13.138 [2024-12-07 01:03:29.161379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.161888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.161984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 
00:36:13.138 [2024-12-07 01:03:29.162576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.162921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.138 [2024-12-07 01:03:29.162948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.138 qpair failed and we were unable to recover it. 00:36:13.138 [2024-12-07 01:03:29.163048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 
00:36:13.139 [2024-12-07 01:03:29.163738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.163895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.163976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.164789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 
00:36:13.139 [2024-12-07 01:03:29.164901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.164928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.165947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.165974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 
00:36:13.139 [2024-12-07 01:03:29.166066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.166897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.166984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 
00:36:13.139 [2024-12-07 01:03:29.167206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.167885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.167913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.168009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.139 [2024-12-07 01:03:29.168038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.139 qpair failed and we were unable to recover it. 00:36:13.139 [2024-12-07 01:03:29.168122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.168228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 
00:36:13.140 [2024-12-07 01:03:29.168330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.140 [2024-12-07 01:03:29.168433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:13.140 [2024-12-07 01:03:29.168556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.140 [2024-12-07 01:03:29.168675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.140 [2024-12-07 01:03:29.168787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.140 [2024-12-07 01:03:29.168898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.168927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 
00:36:13.140 [2024-12-07 01:03:29.169241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.169918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.169944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 
00:36:13.140 [2024-12-07 01:03:29.170376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.170914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.170944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 
00:36:13.140 [2024-12-07 01:03:29.171501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.171944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.171971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 
00:36:13.140 [2024-12-07 01:03:29.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.140 [2024-12-07 01:03:29.172807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.140 qpair failed and we were unable to recover it. 00:36:13.140 [2024-12-07 01:03:29.172892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.172920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 
00:36:13.141 [2024-12-07 01:03:29.173790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.173904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.173931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 
00:36:13.141 [2024-12-07 01:03:29.174875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.174901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.174986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.175882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.175910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 
00:36:13.141 [2024-12-07 01:03:29.176013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.176896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.176976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.177009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.177087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.177113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 
00:36:13.141 [2024-12-07 01:03:29.177195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.141 [2024-12-07 01:03:29.177223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.141 qpair failed and we were unable to recover it. 00:36:13.141 [2024-12-07 01:03:29.177302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.177946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 
00:36:13.142 [2024-12-07 01:03:29.178283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.178893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.178976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 
00:36:13.142 [2024-12-07 01:03:29.179446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.179898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.179979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.180018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.180098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.180126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.180206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.180233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.180317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.180344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 00:36:13.142 [2024-12-07 01:03:29.180425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.142 [2024-12-07 01:03:29.180452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.142 qpair failed and we were unable to recover it. 
00:36:13.142 [2024-12-07 01:03:29.180568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.142 [2024-12-07 01:03:29.180601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:13.142 qpair failed and we were unable to recover it.
00:36:13.142 [... the same three-line connect()/qpair error repeats for every reconnect attempt logged from 01:03:29.180688 through 01:03:29.188571, rotating across tqpair=0x7f238c000b90, 0x7f2388000b90 and 0x7f2394000b90, always against addr=10.0.0.2, port=4420 ...]
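For context on the block above: errno = 111 is ECONNREFUSED on Linux, i.e. each TCP connect() to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) is being actively refused, most likely because nothing is listening there while the target side of this disconnect test is down, and nvme_tcp_qpair_connect_sock() then gives up on the qpair. A minimal sketch, outside the test itself and using the address from the log purely for illustration, that observes the same errno against any reachable host with the port closed:

# Minimal sketch (not part of the SPDK test): observe errno 111 / ECONNREFUSED the way
# posix_sock_create() does, by connecting to a reachable host with no listener on the port.
import errno
import socket

def try_connect(host="10.0.0.2", port=4420, timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0                                  # a listener answered; no error
    except ConnectionRefusedError as e:
        return e.errno                            # 111 on Linux == errno.ECONNREFUSED
    finally:
        s.close()

rc = try_connect()
print(rc, errno.errorcode.get(rc, "connected"))   # prints "111 ECONNREFUSED" when refused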
00:36:13.144 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence continues for the attempts logged from 01:03:29.188651 through 01:03:29.189447, interleaved with the following test trace ...]
00:36:13.144 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:13.144 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:13.144 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.144 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
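The rpc_cmd step traced above is the test creating its backing device: in the SPDK test harness rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py, so bdev_malloc_create 64 512 -b Malloc0 asks the running target for a 64 MB malloc bdev named Malloc0 with a 512-byte block size. A hedged sketch of issuing the same step by hand, assuming an SPDK checkout and a target listening on the default /var/tmp/spdk.sock RPC socket:

# Hypothetical standalone version of the traced rpc_cmd step; the script path and the
# default RPC socket are assumptions about the local SPDK setup, not taken from this log.
import subprocess

cmd = [
    "./scripts/rpc.py",           # rpc.py from an SPDK checkout (assumed working directory)
    "-s", "/var/tmp/spdk.sock",   # default SPDK RPC listen socket (assumption)
    "bdev_malloc_create",         # same RPC method as in the trace above
    "64", "512",                  # 64 MB total size, 512-byte block size
    "-b", "Malloc0",              # bdev name the test refers to later
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())      # rpc.py prints the created bdev's name on success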
00:36:13.144 [2024-12-07 01:03:29.189527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.144 [2024-12-07 01:03:29.189558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420
00:36:13.144 qpair failed and we were unable to recover it.
00:36:13.144 [... the same three-line error repeats for every subsequent attempt through 01:03:29.204498, across tqpair=0x7f2394000b90, 0x7f238c000b90, 0x7f2388000b90 and, once at 01:03:29.197074, tqpair=0x1530730, always against addr=10.0.0.2, port=4420 ...]
00:36:13.410 [2024-12-07 01:03:29.204584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.410 [2024-12-07 01:03:29.204611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420
00:36:13.410 qpair failed and we were unable to recover it.
00:36:13.410 [2024-12-07 01:03:29.204692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.204719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.204829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.204856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.204932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.204959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.205782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 
00:36:13.410 [2024-12-07 01:03:29.205890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.205917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.206921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.206948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 
00:36:13.410 [2024-12-07 01:03:29.207043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.410 [2024-12-07 01:03:29.207731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.410 [2024-12-07 01:03:29.207759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.410 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.207846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.207873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.207990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.208230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.208933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.208962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.209385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.209956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.209982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.210527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.210886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.210914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.211723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.211862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.211985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.212801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.212908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.212940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 00:36:13.411 [2024-12-07 01:03:29.213898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.411 [2024-12-07 01:03:29.213925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.411 qpair failed and we were unable to recover it. 
00:36:13.411 [2024-12-07 01:03:29.214016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.214957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.214992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 
00:36:13.412 [2024-12-07 01:03:29.215232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.215898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.215924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 
00:36:13.412 [2024-12-07 01:03:29.216356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.216892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.216978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 
00:36:13.412 [2024-12-07 01:03:29.217614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.217864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.217974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 
00:36:13.412 [2024-12-07 01:03:29.218751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.218961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.218988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.219804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 
00:36:13.412 [2024-12-07 01:03:29.219917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.219944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.220025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.220053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.220134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.412 [2024-12-07 01:03:29.220161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.412 qpair failed and we were unable to recover it. 00:36:13.412 [2024-12-07 01:03:29.220253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.220929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.220956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 
00:36:13.413 [2024-12-07 01:03:29.221047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.221947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.221974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 
00:36:13.413 [2024-12-07 01:03:29.222186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.222919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.222949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 
00:36:13.413 [2024-12-07 01:03:29.223366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.223949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.223976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 
00:36:13.413 [2024-12-07 01:03:29.224515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.224909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.224938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.225053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.225140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.225166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.225271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.225298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.225431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.413 qpair failed and we were unable to recover it. 00:36:13.413 [2024-12-07 01:03:29.225549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.413 [2024-12-07 01:03:29.225578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.225656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.225684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.225767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.225795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.225877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.225903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.226952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.227070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.227857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.227885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 Malloc0 00:36:13.414 [2024-12-07 01:03:29.227979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.228099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.228229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.414 [2024-12-07 01:03:29.228373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.228491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:13.414 [2024-12-07 01:03:29.228606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.414 [2024-12-07 01:03:29.228721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.414 [2024-12-07 01:03:29.228861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.228887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.228974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.229213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.229901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.230352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.230898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.230926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 
00:36:13.414 [2024-12-07 01:03:29.231486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.414 qpair failed and we were unable to recover it. 00:36:13.414 [2024-12-07 01:03:29.231714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.414 [2024-12-07 01:03:29.231728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.414 [2024-12-07 01:03:29.231740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.231835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.231860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.231940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.231967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 
00:36:13.415 [2024-12-07 01:03:29.232580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.232918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.232945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 
00:36:13.415 [2024-12-07 01:03:29.233743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.233884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.233976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.234870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.234897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 
00:36:13.415 [2024-12-07 01:03:29.235003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.235896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.235977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 
00:36:13.415 [2024-12-07 01:03:29.236251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.236914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.236941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 
00:36:13.415 [2024-12-07 01:03:29.237370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.237940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.237967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.238057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.238083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.238170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.238197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.238311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.238336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.415 qpair failed and we were unable to recover it. 00:36:13.415 [2024-12-07 01:03:29.238419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.415 [2024-12-07 01:03:29.238445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.238557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.238582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.238667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.238695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.238777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.238804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.238886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.238912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.238990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.239653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.239880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.239906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.416 [2024-12-07 01:03:29.239986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.240120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:13.416 [2024-12-07 01:03:29.240146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.240231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.416 [2024-12-07 01:03:29.240347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.416 [2024-12-07 01:03:29.240451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.240673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.240807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.240947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.240974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.241752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.241963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.241989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.242842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.242950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.242976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 00:36:13.416 [2024-12-07 01:03:29.243850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.243876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.416 qpair failed and we were unable to recover it. 
00:36:13.416 [2024-12-07 01:03:29.244017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.416 [2024-12-07 01:03:29.244043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.244962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.244989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 
00:36:13.417 [2024-12-07 01:03:29.245196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.245873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.245966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 
00:36:13.417 [2024-12-07 01:03:29.246331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.246892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.246925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1530730 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 
00:36:13.417 [2024-12-07 01:03:29.247498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.247882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.247967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.417 [2024-12-07 01:03:29.247999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:13.417 [2024-12-07 01:03:29.248203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.417 [2024-12-07 01:03:29.248319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 
00:36:13.417 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.417 [2024-12-07 01:03:29.248432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.248902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.248982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 
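Buried in the records above, host/target_disconnect.sh@24 attaches the Malloc0 bdev to the subsystem as a namespace. A rough stand-alone equivalent is sketched below; it assumes a Malloc0 bdev was created earlier in the run (that step is not visible in this excerpt):

  # Hypothetical reconstruction; assumes Malloc0 already exists (e.g. from bdev_malloc_create earlier in the run)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0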
00:36:13.417 [2024-12-07 01:03:29.249477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.249915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.249942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.250037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.417 [2024-12-07 01:03:29.250064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.417 qpair failed and we were unable to recover it. 00:36:13.417 [2024-12-07 01:03:29.250148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.250587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.250910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.250938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.251705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.251922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.251949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.252845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.252961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.252990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.253902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.253931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.254015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.254902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.254930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.255121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.255884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 [2024-12-07 01:03:29.255974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.256008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 00:36:13.418 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.418 [2024-12-07 01:03:29.256098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 [2024-12-07 01:03:29.256125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.418 qpair failed and we were unable to recover it. 
00:36:13.418 [2024-12-07 01:03:29.256208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.418 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:13.418 [2024-12-07 01:03:29.256235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.419 [2024-12-07 01:03:29.256342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.419 [2024-12-07 01:03:29.256458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.256565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.256674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.256784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.256910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.256939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 
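host/target_disconnect.sh@25 in the trace above is the step that finally opens the TCP listener the host has been retrying against. A rough stand-alone equivalent follows; it assumes the tcp transport was already created earlier in the run (for example with nvmf_create_transport -t tcp), which this excerpt does not show:

  # Hypothetical reconstruction; requires an existing tcp transport on the target
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420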
00:36:13.419 [2024-12-07 01:03:29.257146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.257868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.257973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 
00:36:13.419 [2024-12-07 01:03:29.258319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2394000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.258864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.258892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.259005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.259033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.259109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.259135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.259218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.259245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.259331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.419 [2024-12-07 01:03:29.259359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f238c000b90 with addr=10.0.0.2, port=4420 00:36:13.419 qpair failed and we were unable to recover it. 
00:36:13.419 [2024-12-07 01:03:29.259454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.419 [2024-12-07 01:03:29.259495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.419 qpair failed and we were unable to recover it.
00:36:13.419 [2024-12-07 01:03:29.259582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.419 [2024-12-07 01:03:29.259610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.419 qpair failed and we were unable to recover it.
00:36:13.419 [2024-12-07 01:03:29.259727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.419 [2024-12-07 01:03:29.259755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.419 qpair failed and we were unable to recover it.
00:36:13.419 [2024-12-07 01:03:29.259838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.419 [2024-12-07 01:03:29.259865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:36:13.419 qpair failed and we were unable to recover it.
00:36:13.419 [2024-12-07 01:03:29.259960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:13.419 [2024-12-07 01:03:29.262530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:13.419 [2024-12-07 01:03:29.262663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:13.419 [2024-12-07 01:03:29.262691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:13.419 [2024-12-07 01:03:29.262708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:13.419 [2024-12-07 01:03:29.262720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90
00:36:13.419 [2024-12-07 01:03:29.262757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:13.419 qpair failed and we were unable to recover it.
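Once the *NOTICE* above reports the target listening on 10.0.0.2 port 4420, the failure mode changes: the TCP connect now succeeds, but the fabric-level CONNECT for the I/O queue is rejected by the target (ctrlr.c: Unknown controller ID 0x1, surfaced on the host as sct 1, sc 130), consistent with the target-disconnect scenario this test case (nvmf_target_disconnect_tc2) drives. For reference, a host outside this harness would attempt the same association roughly as sketched below; the nvme-cli invocation is illustrative and not part of the test run:

  # Illustrative only, not from this log: standard nvme-cli connect to the listener announced above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1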
00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.419 01:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 414842 00:36:13.419 [2024-12-07 01:03:29.272367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.419 [2024-12-07 01:03:29.272454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.419 [2024-12-07 01:03:29.272483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.419 [2024-12-07 01:03:29.272499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.419 [2024-12-07 01:03:29.272513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.419 [2024-12-07 01:03:29.272549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.282401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.419 [2024-12-07 01:03:29.282486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.419 [2024-12-07 01:03:29.282511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.419 [2024-12-07 01:03:29.282526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.419 [2024-12-07 01:03:29.282538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.419 [2024-12-07 01:03:29.282569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.419 qpair failed and we were unable to recover it. 
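The bash trace above shows the test re-adding the discovery listener with "rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420". A hedged sketch of issuing the same call directly; the ./scripts/rpc.py path and the target's default RPC socket are assumptions about a standard SPDK checkout, not details taken from this log:

```python
# Sketch: issue the same RPC the test script runs above. The ./scripts/rpc.py
# path and the target's default RPC socket are assumptions about a standard
# SPDK checkout, not details taken from this log.
import subprocess

subprocess.run(
    [
        "./scripts/rpc.py",
        "nvmf_subsystem_add_listener", "discovery",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
    ],
    check=True,
)
```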
00:36:13.419 [2024-12-07 01:03:29.292483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.419 [2024-12-07 01:03:29.292580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.419 [2024-12-07 01:03:29.292606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.419 [2024-12-07 01:03:29.292621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.419 [2024-12-07 01:03:29.292633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.419 [2024-12-07 01:03:29.292665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.419 qpair failed and we were unable to recover it. 00:36:13.419 [2024-12-07 01:03:29.302381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.419 [2024-12-07 01:03:29.302463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.419 [2024-12-07 01:03:29.302488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.419 [2024-12-07 01:03:29.302508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.419 [2024-12-07 01:03:29.302521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.302551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.312350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.312441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.312466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.312481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.312493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.312527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
00:36:13.420 [2024-12-07 01:03:29.322371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.322458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.322482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.322497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.322510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.322540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.332403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.332514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.332541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.332556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.332569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.332598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.342509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.342599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.342628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.342643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.342666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.342705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
00:36:13.420 [2024-12-07 01:03:29.352542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.352627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.352654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.352671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.352684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.352715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.362547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.362667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.362692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.362707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.362719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.362749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.372551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.372645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.372670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.372684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.372697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.372726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
00:36:13.420 [2024-12-07 01:03:29.382544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.382628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.382653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.382667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.382680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.382710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.392663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.392802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.392829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.392844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.392857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.392901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.402612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.402698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.402723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.402737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.402750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.402780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
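The completion-path message "CQ transport error -6 (No such device or address)" repeated in these blocks is the negated POSIX errno ENXIO; a quick check of that mapping:

```python
# "-6 (No such device or address)" in the CQ transport errors above is the
# negated POSIX errno ENXIO. Quick check of the mapping:
import errno
import os

assert errno.ENXIO == 6
print(os.strerror(errno.ENXIO))  # No such device or address
```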
00:36:13.420 [2024-12-07 01:03:29.412619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.412712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.412737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.412751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.412763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.412793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.422681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.422785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.422814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.422830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.422843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.422875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.432694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.432788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.432819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.432837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.432852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.432883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
00:36:13.420 [2024-12-07 01:03:29.442788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.442872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.442899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.442913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.442926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.442955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.452763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.452870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.452895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.452910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.452922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.452952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.462774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.462866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.462891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.462906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.462918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.462948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 
00:36:13.420 [2024-12-07 01:03:29.472843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.472966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.473005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.473024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.473038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.473075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.482809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.482897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.482922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.482937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.482949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.482980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.420 qpair failed and we were unable to recover it. 00:36:13.420 [2024-12-07 01:03:29.492885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.420 [2024-12-07 01:03:29.493007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.420 [2024-12-07 01:03:29.493032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.420 [2024-12-07 01:03:29.493047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.420 [2024-12-07 01:03:29.493059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.420 [2024-12-07 01:03:29.493089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 
00:36:13.421 [2024-12-07 01:03:29.502961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.503057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.503084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.503098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.503111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.503141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 00:36:13.421 [2024-12-07 01:03:29.512907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.512990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.513026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.513041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.513056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.513090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 00:36:13.421 [2024-12-07 01:03:29.522961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.523080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.523105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.523119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.523132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.523162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 
00:36:13.421 [2024-12-07 01:03:29.533019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.533112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.533137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.533152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.533164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.533195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 00:36:13.421 [2024-12-07 01:03:29.542979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.543079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.543104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.543119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.543132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.543162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 00:36:13.421 [2024-12-07 01:03:29.553100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.421 [2024-12-07 01:03:29.553191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.421 [2024-12-07 01:03:29.553216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.421 [2024-12-07 01:03:29.553230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.421 [2024-12-07 01:03:29.553242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.421 [2024-12-07 01:03:29.553273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.421 qpair failed and we were unable to recover it. 
00:36:13.679 [2024-12-07 01:03:29.563048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.563154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.563184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.563199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.563212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.563243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.573106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.573215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.573242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.573256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.573269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.573301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.583109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.583198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.583227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.583242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.583255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.583285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 
00:36:13.679 [2024-12-07 01:03:29.593138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.593221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.593246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.593261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.593273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.593314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.603158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.603286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.603311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.603327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.603345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.603376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.613250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.613345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.613370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.613385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.613397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.613427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 
00:36:13.679 [2024-12-07 01:03:29.623229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.623311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.623337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.623352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.623364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.623394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.633272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.633363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.633388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.633403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.633415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.633445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.643307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.643395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.643420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.643435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.643447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.643477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 
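Every block from here on follows the same shape: the host sets up a new qpair, polls the Fabrics CONNECT, receives the command-specific failure, and gives the qpair up ("qpair failed and we were unable to recover it") before the harness tries again. A schematic of that connect-then-poll loop; connect_io_qpair and poll_connect are hypothetical stand-ins, not SPDK functions:

```python
# Schematic only; connect_io_qpair() and poll_connect() are hypothetical
# stand-ins for the transport setup and the CONNECT polling step, not SPDK
# APIs. It mirrors the pattern in the log: build a qpair, poll the Fabrics
# CONNECT, and treat a command-specific failure as unrecoverable for that
# qpair before trying a fresh one.
import time

def attach_io_qpair(connect_io_qpair, poll_connect, attempts=5, delay_s=0.01):
    for _ in range(attempts):
        qpair = connect_io_qpair()      # new socket/qpair, like the blocks above
        status = poll_connect(qpair)    # (sct, sc) from the CONNECT response
        if status == (0, 0):
            return qpair                # connected
        # e.g. (1, 0x82) -> "qpair failed and we were unable to recover it";
        # the harness simply moves on and tries another qpair.
        time.sleep(delay_s)
    return None

# Demo with stubs that always fail the way the log does:
print(attach_io_qpair(lambda: object(), lambda q: (1, 0x82), attempts=2))  # None
```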
00:36:13.679 [2024-12-07 01:03:29.653305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.653409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.653434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.653449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.653461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.679 [2024-12-07 01:03:29.653492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.679 qpair failed and we were unable to recover it. 00:36:13.679 [2024-12-07 01:03:29.663321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.679 [2024-12-07 01:03:29.663406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.679 [2024-12-07 01:03:29.663432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.679 [2024-12-07 01:03:29.663447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.679 [2024-12-07 01:03:29.663460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.663490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.673356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.673440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.673466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.673480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.673493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.673523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.680 [2024-12-07 01:03:29.683363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.683486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.683511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.683525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.683538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.683568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.693460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.693551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.693581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.693596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.693608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.693638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.703419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.703511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.703536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.703551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.703563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.703593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.680 [2024-12-07 01:03:29.713487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.713586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.713613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.713627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.713640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.713670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.723530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.723612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.723637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.723652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.723664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.723693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.733558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.733670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.733695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.733715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.733729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.733761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.680 [2024-12-07 01:03:29.743566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.743654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.743679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.743694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.743707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.743737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.753644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.753753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.753778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.753792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.753805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.753836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.763600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.763681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.763706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.763720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.763733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.763764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.680 [2024-12-07 01:03:29.773649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.773742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.773767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.773781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.773794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.773824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.783698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.783785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.783810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.783825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.783837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.783866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.793698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.793776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.793800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.793815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.793827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.793857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.680 [2024-12-07 01:03:29.803743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.803826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.803852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.803871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.803883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.803914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.813762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.813900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.813927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.813942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.813954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.814018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 00:36:13.680 [2024-12-07 01:03:29.823777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.680 [2024-12-07 01:03:29.823867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.680 [2024-12-07 01:03:29.823892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.680 [2024-12-07 01:03:29.823906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.680 [2024-12-07 01:03:29.823919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.680 [2024-12-07 01:03:29.823948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.680 qpair failed and we were unable to recover it. 
00:36:13.938 [2024-12-07 01:03:29.833825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.938 [2024-12-07 01:03:29.833920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.938 [2024-12-07 01:03:29.833947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.938 [2024-12-07 01:03:29.833964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.938 [2024-12-07 01:03:29.833977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.938 [2024-12-07 01:03:29.834017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.938 qpair failed and we were unable to recover it. 00:36:13.938 [2024-12-07 01:03:29.843933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.938 [2024-12-07 01:03:29.844023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.844048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.844062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.844076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.844106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.853872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.853968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.853992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.854017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.854030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.854060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 
00:36:13.939 [2024-12-07 01:03:29.863945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.864069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.864094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.864114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.864127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.864157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.873981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.874094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.874118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.874132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.874145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.874174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.884050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.884168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.884196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.884210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.884222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.884252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 
00:36:13.939 [2024-12-07 01:03:29.894006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.894099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.894124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.894137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.894150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.894192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.904049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.904139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.904163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.904178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.904190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.904225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.914043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.914125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.914150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.914165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.914178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.914208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 
00:36:13.939 [2024-12-07 01:03:29.924111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.924241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.924268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.924283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.924295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.924325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.934223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.934346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.934373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.934388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.934401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.934431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.944169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.944282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.944309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.944324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.944337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.944367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 
00:36:13.939 [2024-12-07 01:03:29.954150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.954246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.954271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.954285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.954298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.954328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.939 [2024-12-07 01:03:29.964266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.939 [2024-12-07 01:03:29.964350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.939 [2024-12-07 01:03:29.964376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.939 [2024-12-07 01:03:29.964390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.939 [2024-12-07 01:03:29.964403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.939 [2024-12-07 01:03:29.964433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.939 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:29.974263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:29.974355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:29.974380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:29.974394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:29.974406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:29.974436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-07 01:03:29.984257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:29.984353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:29.984378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:29.984392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:29.984405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:29.984434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:29.994351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:29.994439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:29.994469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:29.994484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:29.994497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:29.994527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.004330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.004413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.004438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.004452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.004465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.004495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-07 01:03:30.014484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.014614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.014651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.014676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.014698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.014743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.024406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.024501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.024534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.024559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.024582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.024626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.034437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.034546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.034584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.034611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.034643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.034705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-07 01:03:30.044432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.044527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.044555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.044570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.044583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.044615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.054557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.054657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.054683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.054698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.054711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.054741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.064510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.064603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.064629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.064644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.064656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.064687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-07 01:03:30.074524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.074610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.074636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.074650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.074662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.074695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-07 01:03:30.084564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.940 [2024-12-07 01:03:30.084653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.940 [2024-12-07 01:03:30.084678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.940 [2024-12-07 01:03:30.084693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.940 [2024-12-07 01:03:30.084706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:13.940 [2024-12-07 01:03:30.084736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.940 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.094575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.094665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.094690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.094704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.094717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.094746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 
00:36:14.197 [2024-12-07 01:03:30.104600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.104688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.104716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.104731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.104744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.104774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.114590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.114686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.114711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.114725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.114738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.114771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.124746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.124871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.124903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.124919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.124931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.124962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 
00:36:14.197 [2024-12-07 01:03:30.134709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.134803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.134827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.134841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.134854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.134885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.144757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.144855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.144914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.144938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.144954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:14.197 [2024-12-07 01:03:30.145006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.154818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.154936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.154971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.154989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.155013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.197 [2024-12-07 01:03:30.155048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.197 qpair failed and we were unable to recover it. 
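Up to this point every entry in this block reports the same host-side failure: the fabrics CONNECT for I/O qpair id 1 on tqpair 0x7f2394000b90 completes with sct 1, sc 130 after the target logs "Unknown controller ID 0x1"; from the entry above onward the identical pattern repeats for qpair id 3 on tqpair 0x1530730. sct 1 is the command-specific status type, and for a Fabrics Connect command sc 0x82 (decimal 130) is defined as "Connect Invalid Parameters", which is consistent with the target rejecting a controller ID it does not recognize. A minimal decode of that status pair, assuming only the spec-defined Connect status values and no SPDK headers, is sketched below.

# Hedged sketch: decode the "sct X, sc Y" pair printed by the CONNECT poll
# errors above. The table holds the NVMe-oF Connect command-specific status
# values from the Fabrics specification; this is illustrative, not SPDK code.
CONNECT_STATUS = {
    0x80: "Connect Incompatible Format",
    0x81: "Connect Controller Busy",
    0x82: "Connect Invalid Parameters",
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

def decode_connect_status(sct: int, sc: int) -> str:
    """Return a readable label for a (status code type, status code) pair."""
    if sct == 1:  # command-specific status type
        return CONNECT_STATUS.get(sc, f"command-specific status 0x{sc:02x}")
    return f"sct {sct}, sc 0x{sc:02x}"

print(decode_connect_status(1, 130))  # -> Connect Invalid Parameters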
00:36:14.197 [2024-12-07 01:03:30.164763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.164880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.164908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.164923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.164941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.197 [2024-12-07 01:03:30.164971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.174805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.174898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.174923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.174937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.197 [2024-12-07 01:03:30.174950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.197 [2024-12-07 01:03:30.174979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.197 qpair failed and we were unable to recover it. 00:36:14.197 [2024-12-07 01:03:30.184898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.197 [2024-12-07 01:03:30.184992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.197 [2024-12-07 01:03:30.185026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.197 [2024-12-07 01:03:30.185041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.185053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.185083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.194867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.194987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.195023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.195039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.195052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.195081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.204848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.204932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.204957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.204971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.204984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.205021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.215062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.215153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.215178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.215193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.215206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.215237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.225051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.225189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.225220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.225237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.225250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.225280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.234969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.235066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.235092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.235107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.235120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.235149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.245015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.245102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.245127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.245142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.245156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.245185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.255049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.255143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.255173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.255188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.255201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.255230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.265046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.265160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.265187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.265202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.265214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.265244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.275083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.275201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.275228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.275242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.275255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.275284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.285100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.285193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.285218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.285232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.285244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.285273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.295203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.295324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.295349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.295363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.295382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.295411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.305218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.305321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.305348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.305363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.305375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.305404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.315191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.315274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.315299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.315313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.315325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.315355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.325220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.325304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.325328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.325343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.325356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.325385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.198 [2024-12-07 01:03:30.335249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.335339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.335364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.335378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.335391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.335420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 
00:36:14.198 [2024-12-07 01:03:30.345356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.198 [2024-12-07 01:03:30.345475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.198 [2024-12-07 01:03:30.345502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.198 [2024-12-07 01:03:30.345517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.198 [2024-12-07 01:03:30.345530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.198 [2024-12-07 01:03:30.345559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.198 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.355331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.355416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.355441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.355455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.355469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.355498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.365372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.365458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.365482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.365497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.365509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.365538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 
00:36:14.464 [2024-12-07 01:03:30.375382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.375473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.375500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.375515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.375527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.375558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.385392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.385490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.385523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.385539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.385552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.385586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.395417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.395507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.395533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.395547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.395560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.395589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 
00:36:14.464 [2024-12-07 01:03:30.405453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.405545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.405572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.405587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.405599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.405628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.415564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.415699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.415728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.415750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.415763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.415792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 00:36:14.464 [2024-12-07 01:03:30.425499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.464 [2024-12-07 01:03:30.425597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.464 [2024-12-07 01:03:30.425624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.464 [2024-12-07 01:03:30.425638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.464 [2024-12-07 01:03:30.425656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.464 [2024-12-07 01:03:30.425687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.464 qpair failed and we were unable to recover it. 
00:36:14.464 [2024-12-07 01:03:30.435539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.435637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.435664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.435679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.435691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.435720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.445523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.445615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.445639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.445653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.445666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.445696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.455576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.455702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.455729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.455744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.455757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.455786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.465 [2024-12-07 01:03:30.465629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.465718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.465743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.465757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.465770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.465799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.475664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.475768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.475795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.475810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.475822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.475852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.485637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.485726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.485751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.485771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.485784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.485812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.465 [2024-12-07 01:03:30.495777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.495904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.495930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.495945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.495957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.496004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.505757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.505868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.505894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.505909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.505921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.505950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.515764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.515889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.515921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.515936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.515948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.515977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.465 [2024-12-07 01:03:30.525755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.525838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.525863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.525879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.525892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.525921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.535806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.535942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.535969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.535990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.536013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.536043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.545921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.546026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.546054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.546069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.546082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.546112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.465 [2024-12-07 01:03:30.555866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.555955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.555986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.556011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.556030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.556060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.565963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.566072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.566097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.566111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.566123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.566153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.575954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.576114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.576141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.576156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.576168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.576197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.465 [2024-12-07 01:03:30.585943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.586060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.586087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.586102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.586113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.586142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.595960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.596058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.596084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.596099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.596112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.596140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 00:36:14.465 [2024-12-07 01:03:30.605975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.465 [2024-12-07 01:03:30.606070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.465 [2024-12-07 01:03:30.606096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.465 [2024-12-07 01:03:30.606110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.465 [2024-12-07 01:03:30.606122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.465 [2024-12-07 01:03:30.606151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.465 qpair failed and we were unable to recover it. 
00:36:14.725 [2024-12-07 01:03:30.616061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.616175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.616206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.616231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.616252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.616296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 00:36:14.725 [2024-12-07 01:03:30.626090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.626204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.626232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.626247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.626260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.626291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 00:36:14.725 [2024-12-07 01:03:30.636082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.636174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.636199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.636213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.636226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.636255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 
00:36:14.725 [2024-12-07 01:03:30.646094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.646180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.646213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.646228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.646241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.646271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 00:36:14.725 [2024-12-07 01:03:30.656149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.656240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.656265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.656279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.656292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.656321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 00:36:14.725 [2024-12-07 01:03:30.666293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.666428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.666455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.666470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.666482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.725 [2024-12-07 01:03:30.666511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.725 qpair failed and we were unable to recover it. 
00:36:14.725 [2024-12-07 01:03:30.676184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.725 [2024-12-07 01:03:30.676278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.725 [2024-12-07 01:03:30.676303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.725 [2024-12-07 01:03:30.676317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.725 [2024-12-07 01:03:30.676330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.676358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.686204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.686338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.686365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.686380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.686402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.686431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.696370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.696460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.696485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.696499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.696512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.696543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 
00:36:14.726 [2024-12-07 01:03:30.706268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.706367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.706393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.706408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.706421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.706449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.716315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.716399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.716424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.716438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.716450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.716479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.726351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.726438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.726463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.726477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.726490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.726519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 
00:36:14.726 [2024-12-07 01:03:30.736389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.736537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.736563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.736578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.736591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.736620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.746480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.746573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.746597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.746611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.746624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.746654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.756437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.756562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.756589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.756604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.756617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.756646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 
00:36:14.726 [2024-12-07 01:03:30.766475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.766593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.766619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.766635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.766648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.766677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.776459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.776552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.776582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.776596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.776609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.776638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.786527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.786635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.786661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.786676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.786688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.786718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 
00:36:14.726 [2024-12-07 01:03:30.796555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.796643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.796671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.796686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.796699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.726 [2024-12-07 01:03:30.796728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.726 qpair failed and we were unable to recover it. 00:36:14.726 [2024-12-07 01:03:30.806545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.726 [2024-12-07 01:03:30.806629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.726 [2024-12-07 01:03:30.806655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.726 [2024-12-07 01:03:30.806670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.726 [2024-12-07 01:03:30.806682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.806711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 00:36:14.727 [2024-12-07 01:03:30.816658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.816774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.816800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.816814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.816833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.816863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 
00:36:14.727 [2024-12-07 01:03:30.826620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.826712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.826738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.826753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.826765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.826794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 00:36:14.727 [2024-12-07 01:03:30.836698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.836794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.836819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.836833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.836846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.836875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 00:36:14.727 [2024-12-07 01:03:30.846752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.846844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.846869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.846883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.846895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.846927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 
00:36:14.727 [2024-12-07 01:03:30.856766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.856897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.856924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.856939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.856951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.856980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 00:36:14.727 [2024-12-07 01:03:30.866749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.727 [2024-12-07 01:03:30.866848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.727 [2024-12-07 01:03:30.866873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.727 [2024-12-07 01:03:30.866888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.727 [2024-12-07 01:03:30.866901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.727 [2024-12-07 01:03:30.866930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.727 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.876808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.876934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.876963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.876979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.876991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.877031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:30.886822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.886943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.886970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.886985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.887005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.887036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.896863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.896983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.897017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.897033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.897046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.897076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.906911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.907033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.907064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.907079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.907092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.907121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:30.916895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.917037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.917069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.917085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.917098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.917128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.926930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.927037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.927064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.927078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.927091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.927121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.936952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.937057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.937083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.937098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.937110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.937139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:30.946987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.947112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.947138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.947153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.947171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.947201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.957074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.957158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.957183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.957199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.957212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.957242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.967022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.967124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.967148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.967163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.967176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.967205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:30.977138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.977275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.977314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.977329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.977343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.977372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.987103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.987194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.987219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.987234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.987247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.987276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:30.997130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:30.997217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:30.997242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:30.997257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:30.997270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:30.997299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:31.007174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.007277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.007302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.007316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.007329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.007358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:31.017229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.017318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.017343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.017357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.017370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.017398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:31.027210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.027328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.027354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.027369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.027382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.027412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:31.037284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.037388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.037419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.037434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.037447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.037476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:31.047298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.047382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.047407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.047421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.047434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.047463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 00:36:14.985 [2024-12-07 01:03:31.057294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.057402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.057426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.057442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.057456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.985 [2024-12-07 01:03:31.057485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.985 qpair failed and we were unable to recover it. 
00:36:14.985 [2024-12-07 01:03:31.067358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.985 [2024-12-07 01:03:31.067488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.985 [2024-12-07 01:03:31.067515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.985 [2024-12-07 01:03:31.067531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.985 [2024-12-07 01:03:31.067544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.067573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 00:36:14.986 [2024-12-07 01:03:31.077415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.077508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.077533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.077553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.077566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.077595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 00:36:14.986 [2024-12-07 01:03:31.087379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.087460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.087485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.087500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.087512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.087541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 
00:36:14.986 [2024-12-07 01:03:31.097431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.097545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.097569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.097593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.097606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.097635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 00:36:14.986 [2024-12-07 01:03:31.107427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.107523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.107548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.107562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.107576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.107605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 00:36:14.986 [2024-12-07 01:03:31.117430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.117512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.117537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.117552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.117563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.117593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 
00:36:14.986 [2024-12-07 01:03:31.127585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.986 [2024-12-07 01:03:31.127720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.986 [2024-12-07 01:03:31.127748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.986 [2024-12-07 01:03:31.127764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.986 [2024-12-07 01:03:31.127777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:14.986 [2024-12-07 01:03:31.127808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.986 qpair failed and we were unable to recover it. 00:36:15.244 [2024-12-07 01:03:31.137565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.244 [2024-12-07 01:03:31.137654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.244 [2024-12-07 01:03:31.137679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.244 [2024-12-07 01:03:31.137694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.244 [2024-12-07 01:03:31.137706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.244 [2024-12-07 01:03:31.137736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.244 qpair failed and we were unable to recover it. 00:36:15.244 [2024-12-07 01:03:31.147578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.147666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.147693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.147709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.147723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.147752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 
00:36:15.245 [2024-12-07 01:03:31.157537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.157617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.157642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.157656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.157669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.157697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.167563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.167642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.167672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.167687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.167699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.167728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.177675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.177769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.177793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.177807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.177820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.177849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 
00:36:15.245 [2024-12-07 01:03:31.187621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.187713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.187737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.187751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.187764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.187793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.197699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.197823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.197850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.197866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.197878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.197907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.207701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.207785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.207810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.207830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.207843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.207872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 
00:36:15.245 [2024-12-07 01:03:31.217724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.217814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.217840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.217854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.217867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.217895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.227785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.227868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.227894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.227909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.227922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.227951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.237802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.237890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.237916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.237930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.237943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.237971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 
00:36:15.245 [2024-12-07 01:03:31.247812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.247939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.247963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.247978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.247991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.248029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.257865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.257958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.257983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.258005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.258020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.258050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.267964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.268065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.268091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.268106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.268118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.268148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 
00:36:15.245 [2024-12-07 01:03:31.277909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.278008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.278034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.278049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.278062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.278091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.287919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.288011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.288037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.288052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.288065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.245 [2024-12-07 01:03:31.288094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.245 qpair failed and we were unable to recover it. 00:36:15.245 [2024-12-07 01:03:31.297972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.245 [2024-12-07 01:03:31.298070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.245 [2024-12-07 01:03:31.298101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.245 [2024-12-07 01:03:31.298117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.245 [2024-12-07 01:03:31.298130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.298159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 
00:36:15.246 [2024-12-07 01:03:31.307993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.308118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.308142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.308157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.308170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.308200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.318007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.318091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.318115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.318130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.318142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.318171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.328132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.328214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.328240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.328255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.328268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.328300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 
00:36:15.246 [2024-12-07 01:03:31.338079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.338171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.338198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.338222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.338235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.338264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.348101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.348191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.348216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.348231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.348244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.348272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.358124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.358249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.358274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.358288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.358302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.358330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 
00:36:15.246 [2024-12-07 01:03:31.368176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.368263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.368288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.368303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.368316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.368345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.378284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.378376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.378403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.378419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.378432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.378462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 00:36:15.246 [2024-12-07 01:03:31.388222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.246 [2024-12-07 01:03:31.388308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.246 [2024-12-07 01:03:31.388335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.246 [2024-12-07 01:03:31.388350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.246 [2024-12-07 01:03:31.388363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.246 [2024-12-07 01:03:31.388392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.246 qpair failed and we were unable to recover it. 
00:36:15.506 [2024-12-07 01:03:31.398233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.398322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.398347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.398362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.398375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.398404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 00:36:15.506 [2024-12-07 01:03:31.408296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.408388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.408417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.408433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.408446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.408476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 00:36:15.506 [2024-12-07 01:03:31.418326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.418447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.418473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.418488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.418502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.418531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 
00:36:15.506 [2024-12-07 01:03:31.428312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.428406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.428432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.428446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.428460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.428489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 00:36:15.506 [2024-12-07 01:03:31.438363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.438451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.438476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.438492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.438505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.438534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 00:36:15.506 [2024-12-07 01:03:31.448378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.448466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.448491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.448505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.506 [2024-12-07 01:03:31.448519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.506 [2024-12-07 01:03:31.448548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.506 qpair failed and we were unable to recover it. 
00:36:15.506 [2024-12-07 01:03:31.458534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.506 [2024-12-07 01:03:31.458626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.506 [2024-12-07 01:03:31.458651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.506 [2024-12-07 01:03:31.458665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.458678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.458708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.468454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.468575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.468600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.468620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.468634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.468663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.478493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.478592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.478617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.478632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.478645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.478675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 
00:36:15.507 [2024-12-07 01:03:31.488453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.488534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.488560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.488575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.488587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.488615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.498575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.498672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.498697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.498711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.498724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.498753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.508568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.508658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.508683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.508698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.508711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.508740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 
00:36:15.507 [2024-12-07 01:03:31.518677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.518766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.518791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.518806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.518818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.518847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.528601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.528735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.528759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.528774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.528786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.528815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.538627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.538753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.538778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.538792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.538805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.538834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 
00:36:15.507 [2024-12-07 01:03:31.548646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.548733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.548758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.548772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.548784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.548813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.558699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.558794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.558819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.558834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.558846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.558875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 00:36:15.507 [2024-12-07 01:03:31.568703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.507 [2024-12-07 01:03:31.568787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.507 [2024-12-07 01:03:31.568812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.507 [2024-12-07 01:03:31.568827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.507 [2024-12-07 01:03:31.568839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.507 [2024-12-07 01:03:31.568868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.507 qpair failed and we were unable to recover it. 
00:36:15.507 [2024-12-07 01:03:31.578827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.578961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.578987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.579013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.579027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.579057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.508 [2024-12-07 01:03:31.588759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.588847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.588871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.588885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.588899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.588928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.508 [2024-12-07 01:03:31.598804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.598889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.598914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.598933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.598947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.598977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 
00:36:15.508 [2024-12-07 01:03:31.608833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.608967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.608999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.609016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.609029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.609059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.508 [2024-12-07 01:03:31.618846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.618961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.618986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.619019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.619043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.619084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.508 [2024-12-07 01:03:31.628974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.629083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.629110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.629130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.629143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.629174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 
00:36:15.508 [2024-12-07 01:03:31.638903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.639027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.639053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.639067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.639080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.639110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.508 [2024-12-07 01:03:31.648916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.508 [2024-12-07 01:03:31.649007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.508 [2024-12-07 01:03:31.649040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.508 [2024-12-07 01:03:31.649055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.508 [2024-12-07 01:03:31.649068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.508 [2024-12-07 01:03:31.649097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.508 qpair failed and we were unable to recover it. 00:36:15.769 [2024-12-07 01:03:31.658983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.659083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.659108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.659122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.659135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.659165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 
00:36:15.769 [2024-12-07 01:03:31.669064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.669155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.669183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.669198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.669211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.669242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 00:36:15.769 [2024-12-07 01:03:31.679024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.679111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.679136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.679150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.679163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.679192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 00:36:15.769 [2024-12-07 01:03:31.689048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.689137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.689162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.689177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.689190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.689218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 
00:36:15.769 [2024-12-07 01:03:31.699077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.699168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.699192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.699207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.699221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.699249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 00:36:15.769 [2024-12-07 01:03:31.709117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.709203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.709227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.709241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.709254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.769 [2024-12-07 01:03:31.709282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.769 qpair failed and we were unable to recover it. 00:36:15.769 [2024-12-07 01:03:31.719170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.769 [2024-12-07 01:03:31.719281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.769 [2024-12-07 01:03:31.719306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.769 [2024-12-07 01:03:31.719320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.769 [2024-12-07 01:03:31.719332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.719362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 
00:36:15.770 [2024-12-07 01:03:31.729154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.729239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.729264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.729284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.729297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.729327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.739220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.739311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.739336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.739351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.739363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.739392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.749234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.749328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.749353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.749367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.749380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.749408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 
00:36:15.770 [2024-12-07 01:03:31.759324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.759406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.759431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.759445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.759458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.759487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.769302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.769390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.769416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.769431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.769444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.769477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.779328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.779449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.779474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.779488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.779502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.779531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 
00:36:15.770 [2024-12-07 01:03:31.789348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.789438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.789463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.789478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.789491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.789520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.799346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.799440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.799468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.799490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.799504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.799534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 00:36:15.770 [2024-12-07 01:03:31.809460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.809537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.809563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.809577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.770 [2024-12-07 01:03:31.809591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.770 [2024-12-07 01:03:31.809620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.770 qpair failed and we were unable to recover it. 
00:36:15.770 [2024-12-07 01:03:31.819436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.770 [2024-12-07 01:03:31.819546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.770 [2024-12-07 01:03:31.819571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.770 [2024-12-07 01:03:31.819585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.819597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.819626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.829479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.829569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.829594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.829609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.829623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.829652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.839452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.839555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.839581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.839596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.839609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.839637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 
00:36:15.771 [2024-12-07 01:03:31.849496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.849579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.849605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.849620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.849633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.849661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.859526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.859617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.859643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.859662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.859676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.859705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.869571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.869659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.869685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.869700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.869713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.869741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 
00:36:15.771 [2024-12-07 01:03:31.879627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.879710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.879737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.879753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.879765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.879795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.889740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.889827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.889853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.889869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.889881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.889912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:15.771 [2024-12-07 01:03:31.899686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.899778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.899804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.899819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.899832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.899867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 
00:36:15.771 [2024-12-07 01:03:31.909701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.771 [2024-12-07 01:03:31.909789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.771 [2024-12-07 01:03:31.909814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.771 [2024-12-07 01:03:31.909829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.771 [2024-12-07 01:03:31.909841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:15.771 [2024-12-07 01:03:31.909871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.771 qpair failed and we were unable to recover it. 00:36:16.032 [2024-12-07 01:03:31.919719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.919798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.919823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.919838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.919851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.919880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 00:36:16.032 [2024-12-07 01:03:31.929772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.929856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.929881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.929896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.929909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.929938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 
00:36:16.032 [2024-12-07 01:03:31.939772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.939898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.939923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.939937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.939951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.939980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 00:36:16.032 [2024-12-07 01:03:31.949796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.949919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.949944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.949959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.949972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.950008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 00:36:16.032 [2024-12-07 01:03:31.959831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.959915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.959940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.959954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.959967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.960003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 
00:36:16.032 [2024-12-07 01:03:31.969828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.032 [2024-12-07 01:03:31.969947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.032 [2024-12-07 01:03:31.969974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.032 [2024-12-07 01:03:31.969989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.032 [2024-12-07 01:03:31.970016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.032 [2024-12-07 01:03:31.970046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.032 qpair failed and we were unable to recover it. 00:36:16.032 [2024-12-07 01:03:31.979878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:31.979971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:31.980006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:31.980025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:31.980039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:31.980069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:31.989897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:31.989987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:31.990018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:31.990039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:31.990052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:31.990082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 
00:36:16.033 [2024-12-07 01:03:31.999909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.000042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.000069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.000084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.000097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.000126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.009935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.010032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.010058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.010071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.010084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.010113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.020002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.020107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.020131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.020145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.020157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.020187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 
00:36:16.033 [2024-12-07 01:03:32.030003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.030095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.030120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.030134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.030146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.030181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.040027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.040109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.040134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.040148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.040161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.040190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.050189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.050319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.050346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.050361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.050373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.050402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 
00:36:16.033 [2024-12-07 01:03:32.060212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.060308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.060333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.060348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.060361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.060390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.070192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.070322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.033 [2024-12-07 01:03:32.070349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.033 [2024-12-07 01:03:32.070364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.033 [2024-12-07 01:03:32.070377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.033 [2024-12-07 01:03:32.070406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.033 qpair failed and we were unable to recover it. 00:36:16.033 [2024-12-07 01:03:32.080194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.033 [2024-12-07 01:03:32.080289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.080313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.080327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.080340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.080369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 
00:36:16.034 [2024-12-07 01:03:32.090237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.090327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.090354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.090371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.090384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.090413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.100225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.100314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.100339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.100354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.100367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.100395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.110346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.110435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.110460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.110473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.110486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.110515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 
00:36:16.034 [2024-12-07 01:03:32.120277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.120359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.120384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.120403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.120417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.120452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.130316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.130450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.130478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.130494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.130506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.130536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.140368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.140464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.140489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.140504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.140517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.140546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 
00:36:16.034 [2024-12-07 01:03:32.150353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.150448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.150473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.150487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.150500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.150529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.160395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.160485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.160510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.160524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.160537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.160571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 00:36:16.034 [2024-12-07 01:03:32.170393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.034 [2024-12-07 01:03:32.170522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.034 [2024-12-07 01:03:32.170548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.034 [2024-12-07 01:03:32.170563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.034 [2024-12-07 01:03:32.170575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.034 [2024-12-07 01:03:32.170604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.034 qpair failed and we were unable to recover it. 
00:36:16.294 [2024-12-07 01:03:32.180471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.294 [2024-12-07 01:03:32.180581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.294 [2024-12-07 01:03:32.180606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.294 [2024-12-07 01:03:32.180621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.294 [2024-12-07 01:03:32.180634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.294 [2024-12-07 01:03:32.180663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.294 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.190489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.190579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.190603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.190617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.190630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.190659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.200502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.200587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.200612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.200626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.200638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.200667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 
00:36:16.295 [2024-12-07 01:03:32.210550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.210638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.210663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.210677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.210692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.210721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.220583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.220672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.220697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.220712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.220725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.220754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.230633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.230739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.230765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.230779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.230792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.230821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 
00:36:16.295 [2024-12-07 01:03:32.240623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.240716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.240741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.240755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.240768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.240796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.250629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.250709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.250734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.250753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.250767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.250796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.260698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.260787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.260811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.260824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.260837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.260865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 
00:36:16.295 [2024-12-07 01:03:32.270709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.270795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.270820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.270834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.270847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.270875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.295 [2024-12-07 01:03:32.280725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.295 [2024-12-07 01:03:32.280812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.295 [2024-12-07 01:03:32.280837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.295 [2024-12-07 01:03:32.280851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.295 [2024-12-07 01:03:32.280863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.295 [2024-12-07 01:03:32.280892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.295 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.290844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.290943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.290968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.290982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.291004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.291048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 
00:36:16.296 [2024-12-07 01:03:32.300782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.300877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.300901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.300915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.300928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.300957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.310817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.310906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.310933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.310949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.310962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.310991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.320823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.320908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.320933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.320947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.320960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.320989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 
00:36:16.296 [2024-12-07 01:03:32.330855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.330938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.330962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.330976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.330989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.331030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.340891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.341029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.341056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.341070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.341083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.341111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.350951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.351063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.351089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.351104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.351116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.351147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 
00:36:16.296 [2024-12-07 01:03:32.360936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.361061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.361088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.361103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.361116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.361145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.370966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.371090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.371114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.371129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.371142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.371170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 00:36:16.296 [2024-12-07 01:03:32.381055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.381163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.381189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.381210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.296 [2024-12-07 01:03:32.381224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.296 [2024-12-07 01:03:32.381254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.296 qpair failed and we were unable to recover it. 
00:36:16.296 [2024-12-07 01:03:32.391041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.296 [2024-12-07 01:03:32.391130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.296 [2024-12-07 01:03:32.391156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.296 [2024-12-07 01:03:32.391171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.391183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.391213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 00:36:16.297 [2024-12-07 01:03:32.401053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.297 [2024-12-07 01:03:32.401131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.297 [2024-12-07 01:03:32.401155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.297 [2024-12-07 01:03:32.401169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.401182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.401211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 00:36:16.297 [2024-12-07 01:03:32.411162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.297 [2024-12-07 01:03:32.411245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.297 [2024-12-07 01:03:32.411270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.297 [2024-12-07 01:03:32.411284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.411298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.411326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 
00:36:16.297 [2024-12-07 01:03:32.421164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.297 [2024-12-07 01:03:32.421253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.297 [2024-12-07 01:03:32.421278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.297 [2024-12-07 01:03:32.421291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.421304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.421338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 00:36:16.297 [2024-12-07 01:03:32.431147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.297 [2024-12-07 01:03:32.431237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.297 [2024-12-07 01:03:32.431264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.297 [2024-12-07 01:03:32.431280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.431293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.431322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 00:36:16.297 [2024-12-07 01:03:32.441246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.297 [2024-12-07 01:03:32.441356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.297 [2024-12-07 01:03:32.441383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.297 [2024-12-07 01:03:32.441398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.297 [2024-12-07 01:03:32.441411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.297 [2024-12-07 01:03:32.441440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.297 qpair failed and we were unable to recover it. 
00:36:16.557 [2024-12-07 01:03:32.451286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.451373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.451398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.451413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.451426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.451454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 00:36:16.557 [2024-12-07 01:03:32.461240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.461331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.461356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.461371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.461383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.461412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 00:36:16.557 [2024-12-07 01:03:32.471295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.471422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.471449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.471464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.471477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.471506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 
00:36:16.557 [2024-12-07 01:03:32.481283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.481415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.481441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.481456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.481469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.481498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 00:36:16.557 [2024-12-07 01:03:32.491313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.491399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.491424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.491438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.491451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.491479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 00:36:16.557 [2024-12-07 01:03:32.501340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.501430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.501455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.501469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.557 [2024-12-07 01:03:32.501483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.557 [2024-12-07 01:03:32.501511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.557 qpair failed and we were unable to recover it. 
00:36:16.557 [2024-12-07 01:03:32.511358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.557 [2024-12-07 01:03:32.511444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.557 [2024-12-07 01:03:32.511470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.557 [2024-12-07 01:03:32.511490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.511503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.511532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.521367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.521461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.521486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.521501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.521513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.521542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.531417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.531498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.531522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.531536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.531549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.531578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 
00:36:16.558 [2024-12-07 01:03:32.541449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.541554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.541579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.541594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.541607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.541635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.551458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.551544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.551569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.551583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.551596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.551631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.561542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.561628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.561652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.561666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.561679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.561708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 
00:36:16.558 [2024-12-07 01:03:32.571539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.571620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.571645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.571659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.571673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.571701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.581595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.581688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.581713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.581727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.581739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.581768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.591588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.591677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.591702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.591717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.591730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.591758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 
00:36:16.558 [2024-12-07 01:03:32.601632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.601725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.601750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.601765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.601778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.601809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.558 qpair failed and we were unable to recover it. 00:36:16.558 [2024-12-07 01:03:32.611658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.558 [2024-12-07 01:03:32.611738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.558 [2024-12-07 01:03:32.611762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.558 [2024-12-07 01:03:32.611775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.558 [2024-12-07 01:03:32.611788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.558 [2024-12-07 01:03:32.611816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.621697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.621802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.621826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.621840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.621852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.621881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 
00:36:16.559 [2024-12-07 01:03:32.631706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.631798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.631821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.631835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.631848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.631877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.641743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.641827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.641851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.641871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.641884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.641913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.651742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.651832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.651857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.651872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.651884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.651913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 
00:36:16.559 [2024-12-07 01:03:32.661784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.661872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.661896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.661910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.661923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.661952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.671807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.671897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.671922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.671935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.671948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.671977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.681843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.681927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.681951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.681965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.681979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.682021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 
00:36:16.559 [2024-12-07 01:03:32.691862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.691946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.691971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.691985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.692006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.692037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.559 [2024-12-07 01:03:32.701938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.559 [2024-12-07 01:03:32.702032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.559 [2024-12-07 01:03:32.702057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.559 [2024-12-07 01:03:32.702071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.559 [2024-12-07 01:03:32.702083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.559 [2024-12-07 01:03:32.702113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.559 qpair failed and we were unable to recover it. 00:36:16.820 [2024-12-07 01:03:32.712055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.712152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.712177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.712191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.712203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.712232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 
00:36:16.820 [2024-12-07 01:03:32.721990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.722116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.722143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.722157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.722170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.722199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 00:36:16.820 [2024-12-07 01:03:32.731975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.732075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.732099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.732113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.732127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.732156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 00:36:16.820 [2024-12-07 01:03:32.742024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.742112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.742138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.742152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.742165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.742195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 
00:36:16.820 [2024-12-07 01:03:32.752035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.752117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.752143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.752158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.752171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.752201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 00:36:16.820 [2024-12-07 01:03:32.762147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.762250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.762288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.762302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.762315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.762344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 00:36:16.820 [2024-12-07 01:03:32.772117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.772200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.772227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.772247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.772261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.772298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 
00:36:16.820 [2024-12-07 01:03:32.782126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.820 [2024-12-07 01:03:32.782248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.820 [2024-12-07 01:03:32.782286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.820 [2024-12-07 01:03:32.782300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.820 [2024-12-07 01:03:32.782313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.820 [2024-12-07 01:03:32.782341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.820 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.792149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.792243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.792268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.792282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.792295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.792323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.802193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.802279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.802304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.802318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.802330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.802358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 
00:36:16.821 [2024-12-07 01:03:32.812249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.812333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.812357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.812372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.812384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.812419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.822297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.822401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.822426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.822441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.822454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.822482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.832290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.832390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.832415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.832430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.832443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.832472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 
00:36:16.821 [2024-12-07 01:03:32.842287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.842373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.842398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.842412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.842425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.842453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.852365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.852477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.852504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.852519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.852531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.852559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 00:36:16.821 [2024-12-07 01:03:32.862414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.862505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.862530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.862544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.862557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.821 [2024-12-07 01:03:32.862587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.821 qpair failed and we were unable to recover it. 
00:36:16.821 [2024-12-07 01:03:32.872384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.821 [2024-12-07 01:03:32.872473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.821 [2024-12-07 01:03:32.872498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.821 [2024-12-07 01:03:32.872512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.821 [2024-12-07 01:03:32.872524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.872553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 00:36:16.822 [2024-12-07 01:03:32.882455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.882573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.882600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.882615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.822 [2024-12-07 01:03:32.882627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.882657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 00:36:16.822 [2024-12-07 01:03:32.892452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.892536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.892560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.892574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.822 [2024-12-07 01:03:32.892587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.892616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 
00:36:16.822 [2024-12-07 01:03:32.902532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.902630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.902662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.902677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.822 [2024-12-07 01:03:32.902690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.902721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 00:36:16.822 [2024-12-07 01:03:32.912486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.912593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.912618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.912632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.822 [2024-12-07 01:03:32.912644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.912673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 00:36:16.822 [2024-12-07 01:03:32.922637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.922728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.922755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.922772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.822 [2024-12-07 01:03:32.922785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.822 [2024-12-07 01:03:32.922814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.822 qpair failed and we were unable to recover it. 
00:36:16.822 [2024-12-07 01:03:32.932626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.822 [2024-12-07 01:03:32.932711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.822 [2024-12-07 01:03:32.932736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.822 [2024-12-07 01:03:32.932750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.823 [2024-12-07 01:03:32.932762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.823 [2024-12-07 01:03:32.932791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.823 qpair failed and we were unable to recover it. 00:36:16.823 [2024-12-07 01:03:32.942668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.823 [2024-12-07 01:03:32.942807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.823 [2024-12-07 01:03:32.942833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.823 [2024-12-07 01:03:32.942849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.823 [2024-12-07 01:03:32.942861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.823 [2024-12-07 01:03:32.942895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.823 qpair failed and we were unable to recover it. 00:36:16.823 [2024-12-07 01:03:32.952630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.823 [2024-12-07 01:03:32.952728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.823 [2024-12-07 01:03:32.952754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.823 [2024-12-07 01:03:32.952769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.823 [2024-12-07 01:03:32.952781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.823 [2024-12-07 01:03:32.952810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.823 qpair failed and we were unable to recover it. 
00:36:16.823 [2024-12-07 01:03:32.962659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.823 [2024-12-07 01:03:32.962752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.823 [2024-12-07 01:03:32.962778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.823 [2024-12-07 01:03:32.962792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.823 [2024-12-07 01:03:32.962805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:16.823 [2024-12-07 01:03:32.962835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.823 qpair failed and we were unable to recover it. 00:36:17.083 [2024-12-07 01:03:32.972648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.083 [2024-12-07 01:03:32.972737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.083 [2024-12-07 01:03:32.972761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.083 [2024-12-07 01:03:32.972776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.083 [2024-12-07 01:03:32.972789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.083 [2024-12-07 01:03:32.972818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.083 qpair failed and we were unable to recover it. 00:36:17.083 [2024-12-07 01:03:32.982710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.083 [2024-12-07 01:03:32.982809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.083 [2024-12-07 01:03:32.982836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.083 [2024-12-07 01:03:32.982851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:32.982863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:32.982892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 
00:36:17.084 [2024-12-07 01:03:32.992719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:32.992850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:32.992876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:32.992891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:32.992903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:32.992932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.002737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.002824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.002849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.002863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.002875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.002904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.012775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.012866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.012891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.012905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.012917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.012946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 
00:36:17.084 [2024-12-07 01:03:33.022837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.022959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.022985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.023009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.023023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.023052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.032864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.032953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.032983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.033006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.033021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.033051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.042871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.042962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.042987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.043008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.043022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.043052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 
00:36:17.084 [2024-12-07 01:03:33.052875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.052966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.052991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.053016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.053029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.053059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.063049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.063143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.063167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.063182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.063194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.063225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.072947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.073055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.073081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.073096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.073109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.073144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 
00:36:17.084 [2024-12-07 01:03:33.082977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.083075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.083101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.083115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.083127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.083157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.093008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.093094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.093119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.093133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.093146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.093175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 00:36:17.084 [2024-12-07 01:03:33.103047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.103139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.084 [2024-12-07 01:03:33.103163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.084 [2024-12-07 01:03:33.103177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.084 [2024-12-07 01:03:33.103190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.084 [2024-12-07 01:03:33.103219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.084 qpair failed and we were unable to recover it. 
00:36:17.084 [2024-12-07 01:03:33.113072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.084 [2024-12-07 01:03:33.113171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.113195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.113209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.113222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:17.085 [2024-12-07 01:03:33.113251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.123151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.123269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.123303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.123329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.123352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.123398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.133133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.133221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.133249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.133264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.133276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.133307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 
00:36:17.085 [2024-12-07 01:03:33.143213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.143324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.143352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.143367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.143379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.143410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.153176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.153309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.153336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.153350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.153363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.153393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.163220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.163308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.163338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.163353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.163367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.163397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 
00:36:17.085 [2024-12-07 01:03:33.173334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.173432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.173458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.173471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.173484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.173514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.183316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.183407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.183433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.183447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.183461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.183491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.193294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.193380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.193405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.193420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.193432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.193462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 
00:36:17.085 [2024-12-07 01:03:33.203413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.203490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.203516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.203530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.203548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.203579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.213343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.213436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.213462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.213476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.213488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.213519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 00:36:17.085 [2024-12-07 01:03:33.223390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.085 [2024-12-07 01:03:33.223478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.085 [2024-12-07 01:03:33.223503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.085 [2024-12-07 01:03:33.223518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.085 [2024-12-07 01:03:33.223531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.085 [2024-12-07 01:03:33.223560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.085 qpair failed and we were unable to recover it. 
00:36:17.345 [2024-12-07 01:03:33.233417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.345 [2024-12-07 01:03:33.233505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.345 [2024-12-07 01:03:33.233530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.345 [2024-12-07 01:03:33.233544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.345 [2024-12-07 01:03:33.233557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.345 [2024-12-07 01:03:33.233586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.345 qpair failed and we were unable to recover it. 00:36:17.345 [2024-12-07 01:03:33.243561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.345 [2024-12-07 01:03:33.243654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.345 [2024-12-07 01:03:33.243680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.345 [2024-12-07 01:03:33.243694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.345 [2024-12-07 01:03:33.243707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.345 [2024-12-07 01:03:33.243737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.345 qpair failed and we were unable to recover it. 00:36:17.345 [2024-12-07 01:03:33.253551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.345 [2024-12-07 01:03:33.253637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.345 [2024-12-07 01:03:33.253662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.345 [2024-12-07 01:03:33.253676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.253689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.253718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 
00:36:17.346 [2024-12-07 01:03:33.263547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.263637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.263666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.263681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.263693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.263726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.273525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.273621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.273647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.273661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.273674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.273715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.283574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.283662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.283687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.283701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.283714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.283744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 
00:36:17.346 [2024-12-07 01:03:33.293567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.293657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.293691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.293706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.293719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.293749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.303603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.303699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.303724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.303739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.303752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.303782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.313672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.313762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.313786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.313801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.313814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.313844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 
00:36:17.346 [2024-12-07 01:03:33.323667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.323766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.323791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.323806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.323819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.323849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.333687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.333816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.333841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.333856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.333875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.333906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.343767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.343857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.343881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.343895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.343908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.343939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 
00:36:17.346 [2024-12-07 01:03:33.353754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.353872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.353897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.353911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.353924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.353955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.363829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.363924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.363949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.363964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.363976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.364014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 00:36:17.346 [2024-12-07 01:03:33.373819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.373913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.373939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.373953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.346 [2024-12-07 01:03:33.373966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.346 [2024-12-07 01:03:33.374003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.346 qpair failed and we were unable to recover it. 
00:36:17.346 [2024-12-07 01:03:33.383877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.346 [2024-12-07 01:03:33.383978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.346 [2024-12-07 01:03:33.384015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.346 [2024-12-07 01:03:33.384031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.384044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.384077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.393876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.393987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.394021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.394037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.394050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.394080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.403911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.404003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.404030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.404044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.404057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.404088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 
00:36:17.347 [2024-12-07 01:03:33.413915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.414027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.414053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.414067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.414080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.414110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.423947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.424069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.424099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.424115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.424127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.424157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.433952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.434047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.434072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.434086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.434099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.434129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 
00:36:17.347 [2024-12-07 01:03:33.443983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.444082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.444107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.444122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.444134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.444165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.454116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.454200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.454226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.454240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.454253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.454285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.464083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.464201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.464226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.464246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.464260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.464290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 
00:36:17.347 [2024-12-07 01:03:33.474087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.474182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.474206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.474221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.474234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.474264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.347 [2024-12-07 01:03:33.484124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.347 [2024-12-07 01:03:33.484209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.347 [2024-12-07 01:03:33.484233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.347 [2024-12-07 01:03:33.484248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.347 [2024-12-07 01:03:33.484261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.347 [2024-12-07 01:03:33.484291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.347 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.494146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.494260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.494285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.494300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.494313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.494343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 
00:36:17.607 [2024-12-07 01:03:33.504191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.504314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.504339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.504353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.504367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.504403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.514185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.514272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.514298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.514313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.514326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.514356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.524235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.524323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.524348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.524363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.524376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.524407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 
00:36:17.607 [2024-12-07 01:03:33.534256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.534338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.534365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.534380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.534393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.534425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.544315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.544445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.544471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.544486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.544499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.544529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.554363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.554480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.554505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.554520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.554532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.554563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 
00:36:17.607 [2024-12-07 01:03:33.564352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.564482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.564511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.564526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.564539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.564569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.574387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.574468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.574494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.574509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.574522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.574552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.584460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.584552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.584578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.584593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.584605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.584635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 
00:36:17.607 [2024-12-07 01:03:33.594431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.594513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.594539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.594560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.594574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.594604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.607 [2024-12-07 01:03:33.604488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.607 [2024-12-07 01:03:33.604578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.607 [2024-12-07 01:03:33.604604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.607 [2024-12-07 01:03:33.604619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.607 [2024-12-07 01:03:33.604632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.607 [2024-12-07 01:03:33.604662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.607 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.614556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.614677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.614707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.614723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.614736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.614768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 
00:36:17.608 [2024-12-07 01:03:33.624597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.624689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.624715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.624731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.624744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.624774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.634541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.634639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.634666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.634682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.634696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.634733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.644584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.644669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.644695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.644711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.644724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.644754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 
00:36:17.608 [2024-12-07 01:03:33.654658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.654747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.654773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.654788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.654801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.654831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.664641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.664732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.664758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.664772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.664785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.664815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.674668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.674786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.674811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.674825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.674838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.674868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 
00:36:17.608 [2024-12-07 01:03:33.684707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.684791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.684817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.684832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.684844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.684874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.694727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.694810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.694835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.694850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.694863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.694892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.704849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.704981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.705017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.705034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.705047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.705078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 
00:36:17.608 [2024-12-07 01:03:33.714778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.714877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.714902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.714917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.714930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.714960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.724790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.724920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.724950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.724966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.724979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.725018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 00:36:17.608 [2024-12-07 01:03:33.734858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.608 [2024-12-07 01:03:33.734942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.608 [2024-12-07 01:03:33.734971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.608 [2024-12-07 01:03:33.734987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.608 [2024-12-07 01:03:33.735011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.608 [2024-12-07 01:03:33.735045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.608 qpair failed and we were unable to recover it. 
00:36:17.609 [2024-12-07 01:03:33.744892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.609 [2024-12-07 01:03:33.745019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.609 [2024-12-07 01:03:33.745044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.609 [2024-12-07 01:03:33.745059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.609 [2024-12-07 01:03:33.745072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.609 [2024-12-07 01:03:33.745103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.609 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.754966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.755086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.755112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.755126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.755139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.755169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.764922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.765028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.765054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.765069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.765087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.765119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 
00:36:17.868 [2024-12-07 01:03:33.775003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.775089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.775114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.775128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.775141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.775172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.785023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.785132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.785158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.785172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.785185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.785216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.795114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.795251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.795276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.795291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.795304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.795334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 
00:36:17.868 [2024-12-07 01:03:33.805044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.805130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.805156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.805171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.805183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.805213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.815099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.815222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.815248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.815263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.815276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.815306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.825120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.825210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.825236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.825250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.825263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.825293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 
00:36:17.868 [2024-12-07 01:03:33.835112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.835195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.835219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.868 [2024-12-07 01:03:33.835234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.868 [2024-12-07 01:03:33.835246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.868 [2024-12-07 01:03:33.835277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.868 qpair failed and we were unable to recover it. 00:36:17.868 [2024-12-07 01:03:33.845245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.868 [2024-12-07 01:03:33.845341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.868 [2024-12-07 01:03:33.845366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.845380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.845394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.845424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.855194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.855276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.855307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.855323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.855335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.855366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 
00:36:17.869 [2024-12-07 01:03:33.865230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.865322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.865347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.865362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.865377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.865408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.875301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.875385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.875410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.875424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.875436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.875466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.885286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.885382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.885409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.885424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.885437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.885470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 
00:36:17.869 [2024-12-07 01:03:33.895359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.895452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.895478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.895493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.895512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.895544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.905321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.905414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.905439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.905454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.905467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.905497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.915325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.915462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.915491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.915508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.915521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.915552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 
00:36:17.869 [2024-12-07 01:03:33.925428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.925533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.925560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.925575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.925588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.925618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.935399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.935485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.935514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.935530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.935543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.935573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.945468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.945596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.945623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.945642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.945655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.945685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 
00:36:17.869 [2024-12-07 01:03:33.955544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.955628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.955655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.955673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.955686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.955716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.965506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.965593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.965618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.869 [2024-12-07 01:03:33.965632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.869 [2024-12-07 01:03:33.965644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.869 [2024-12-07 01:03:33.965675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.869 qpair failed and we were unable to recover it. 00:36:17.869 [2024-12-07 01:03:33.975531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.869 [2024-12-07 01:03:33.975616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.869 [2024-12-07 01:03:33.975641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.870 [2024-12-07 01:03:33.975656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.870 [2024-12-07 01:03:33.975669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.870 [2024-12-07 01:03:33.975699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.870 qpair failed and we were unable to recover it. 
00:36:17.870 [2024-12-07 01:03:33.985535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.870 [2024-12-07 01:03:33.985627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.870 [2024-12-07 01:03:33.985659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.870 [2024-12-07 01:03:33.985674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.870 [2024-12-07 01:03:33.985687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.870 [2024-12-07 01:03:33.985717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.870 qpair failed and we were unable to recover it. 00:36:17.870 [2024-12-07 01:03:33.995629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.870 [2024-12-07 01:03:33.995717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.870 [2024-12-07 01:03:33.995745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.870 [2024-12-07 01:03:33.995759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.870 [2024-12-07 01:03:33.995772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.870 [2024-12-07 01:03:33.995809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.870 qpair failed and we were unable to recover it. 00:36:17.870 [2024-12-07 01:03:34.005614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.870 [2024-12-07 01:03:34.005703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.870 [2024-12-07 01:03:34.005732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.870 [2024-12-07 01:03:34.005747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.870 [2024-12-07 01:03:34.005760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.870 [2024-12-07 01:03:34.005792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.870 qpair failed and we were unable to recover it. 
00:36:17.870 [2024-12-07 01:03:34.015642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.870 [2024-12-07 01:03:34.015772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.870 [2024-12-07 01:03:34.015798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.870 [2024-12-07 01:03:34.015813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.870 [2024-12-07 01:03:34.015826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:17.870 [2024-12-07 01:03:34.015857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.870 qpair failed and we were unable to recover it. 00:36:18.129 [2024-12-07 01:03:34.025792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.129 [2024-12-07 01:03:34.025890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.129 [2024-12-07 01:03:34.025916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.129 [2024-12-07 01:03:34.025937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.129 [2024-12-07 01:03:34.025950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.129 [2024-12-07 01:03:34.025980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.129 qpair failed and we were unable to recover it. 00:36:18.129 [2024-12-07 01:03:34.035690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.129 [2024-12-07 01:03:34.035776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.129 [2024-12-07 01:03:34.035801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.129 [2024-12-07 01:03:34.035816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.129 [2024-12-07 01:03:34.035829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.129 [2024-12-07 01:03:34.035860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.129 qpair failed and we were unable to recover it. 
00:36:18.129 [2024-12-07 01:03:34.045741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.129 [2024-12-07 01:03:34.045835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.129 [2024-12-07 01:03:34.045860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.129 [2024-12-07 01:03:34.045874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.129 [2024-12-07 01:03:34.045887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.129 [2024-12-07 01:03:34.045917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.129 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.055745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.055833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.055861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.055878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.055891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.055921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.065778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.065896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.065921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.065936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.065949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.065987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 
00:36:18.130 [2024-12-07 01:03:34.075827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.075917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.075945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.075962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.075975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.076014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.085826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.085911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.085937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.085951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.085964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.086003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.095938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.096037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.096063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.096078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.096091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.096121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 
00:36:18.130 [2024-12-07 01:03:34.105914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.106019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.106044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.106059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.106072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.106102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.116033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.116123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.116148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.116163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.116176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.116206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.126025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.126131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.126157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.126172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.126185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.126216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 
00:36:18.130 [2024-12-07 01:03:34.135977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.136066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.136094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.136109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.136122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.136166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.146059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.146150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.146176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.146191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.146203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.146233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.156026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.156109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.156135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.156155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.156168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.156199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 
00:36:18.130 [2024-12-07 01:03:34.166079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.166161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.166186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.166201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.166213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.166244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.130 [2024-12-07 01:03:34.176107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.130 [2024-12-07 01:03:34.176201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.130 [2024-12-07 01:03:34.176225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.130 [2024-12-07 01:03:34.176240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.130 [2024-12-07 01:03:34.176252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.130 [2024-12-07 01:03:34.176282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.130 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.186166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.186285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.186312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.186327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.186340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.186369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 
00:36:18.131 [2024-12-07 01:03:34.196264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.196351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.196375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.196390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.196402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.196438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.206215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.206298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.206323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.206337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.206350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.206382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.216248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.216356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.216382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.216398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.216411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.216440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 
00:36:18.131 [2024-12-07 01:03:34.226258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.226356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.226381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.226396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.226409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.226439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.236318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.236425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.236451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.236465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.236478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.236508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.246308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.246397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.246422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.246437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.246450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.246479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 
00:36:18.131 [2024-12-07 01:03:34.256331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.256410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.256435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.256449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.256461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.256494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.266453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.266543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.266568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.266582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.266594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.266624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 00:36:18.131 [2024-12-07 01:03:34.276429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.131 [2024-12-07 01:03:34.276519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.131 [2024-12-07 01:03:34.276544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.131 [2024-12-07 01:03:34.276558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.131 [2024-12-07 01:03:34.276571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.131 [2024-12-07 01:03:34.276601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.131 qpair failed and we were unable to recover it. 
00:36:18.390 [2024-12-07 01:03:34.286448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.390 [2024-12-07 01:03:34.286529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.390 [2024-12-07 01:03:34.286562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.390 [2024-12-07 01:03:34.286580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.390 [2024-12-07 01:03:34.286593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.390 [2024-12-07 01:03:34.286625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.296464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.296548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.296575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.296593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.296605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.296636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.306484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.306574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.306599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.306614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.306627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.306657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 
00:36:18.391 [2024-12-07 01:03:34.316527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.316610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.316634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.316649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.316661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.316691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.326570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.326663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.326688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.326702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.326720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.326751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.336674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.336763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.336787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.336802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.336814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.336844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 
00:36:18.391 [2024-12-07 01:03:34.346584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.346671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.346695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.346709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.346721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.346751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.356593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.356678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.356703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.356717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.356730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.356759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.366657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.366751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.366780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.366796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.366809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.366839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 
00:36:18.391 [2024-12-07 01:03:34.376650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.376733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.376758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.376772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.376784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.376821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.386702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.386793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.386820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.386834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.386847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.386878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.396700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.396781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.396806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.396820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.396834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.396863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 
00:36:18.391 [2024-12-07 01:03:34.406824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.406910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.406935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.406950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.406962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.391 [2024-12-07 01:03:34.406992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.391 qpair failed and we were unable to recover it. 00:36:18.391 [2024-12-07 01:03:34.416808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.391 [2024-12-07 01:03:34.416905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.391 [2024-12-07 01:03:34.416937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.391 [2024-12-07 01:03:34.416953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.391 [2024-12-07 01:03:34.416965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.417002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.426842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.426958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.426984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.427011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.427025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.427056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 
00:36:18.392 [2024-12-07 01:03:34.436874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.436963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.436988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.437009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.437023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.437053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.446863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.446948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.446976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.446991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.447013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.447045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.456920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.457013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.457039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.457054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.457072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.457103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 
00:36:18.392 [2024-12-07 01:03:34.466988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.467132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.467159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.467174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.467187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.467217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.477035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.477118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.477143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.477157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.477170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.477201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.487013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.487099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.487123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.487137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.487150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.487180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 
00:36:18.392 [2024-12-07 01:03:34.497039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.497134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.497159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.497173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.497185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.497216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.507116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.507213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.507237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.507252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.507264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.507295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.517047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.517139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.517163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.517178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.517190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.517221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 
00:36:18.392 [2024-12-07 01:03:34.527078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.527191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.527218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.527233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.527246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.527276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.392 [2024-12-07 01:03:34.537153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.392 [2024-12-07 01:03:34.537247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.392 [2024-12-07 01:03:34.537271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.392 [2024-12-07 01:03:34.537285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.392 [2024-12-07 01:03:34.537298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.392 [2024-12-07 01:03:34.537327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.392 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.547151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.547245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.547269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.547284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.547297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.547326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 
00:36:18.652 [2024-12-07 01:03:34.557196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.557310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.557337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.557351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.557364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.557394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.567262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.567371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.567399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.567414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.567426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.567456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.577315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.577396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.577420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.577434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.577447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.577478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 
00:36:18.652 [2024-12-07 01:03:34.587327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.587435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.587462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.587483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.587496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.587526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.597328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.597455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.597480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.597495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.597508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.597538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.607337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.607438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.607463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.607478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.607490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.607521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 
00:36:18.652 [2024-12-07 01:03:34.617366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.617472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.617499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.617515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.617527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.617557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.627392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.627487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.627515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.627530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.627542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.627578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 00:36:18.652 [2024-12-07 01:03:34.637420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.637545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.637573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.637588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.637601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.637632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.652 qpair failed and we were unable to recover it. 
00:36:18.652 [2024-12-07 01:03:34.647447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.652 [2024-12-07 01:03:34.647531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.652 [2024-12-07 01:03:34.647558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.652 [2024-12-07 01:03:34.647572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.652 [2024-12-07 01:03:34.647584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.652 [2024-12-07 01:03:34.647615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.657472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.657549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.657574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.657589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.657601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.657631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.667521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.667616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.667641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.667655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.667667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.667698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 
00:36:18.653 [2024-12-07 01:03:34.677645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.677786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.677814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.677828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.677841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.677871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.687556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.687678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.687706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.687721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.687734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.687764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.697566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.697650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.697676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.697690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.697703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.697734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 
00:36:18.653 [2024-12-07 01:03:34.707664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.707752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.707777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.707791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.707804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.707834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.717609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.717692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.717717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.717737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.717751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.717782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.727752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.727836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.727862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.727877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.727889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.727919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 
00:36:18.653 [2024-12-07 01:03:34.737688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.737786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.737814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.737830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.737843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.737876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.747717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.747810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.747835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.747849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.747863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.747893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.757751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.757834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.757858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.757872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.757885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.757923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 
00:36:18.653 [2024-12-07 01:03:34.767766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.767866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.767893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.767908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.767920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.767951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.653 qpair failed and we were unable to recover it. 00:36:18.653 [2024-12-07 01:03:34.777829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.653 [2024-12-07 01:03:34.777950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.653 [2024-12-07 01:03:34.777978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.653 [2024-12-07 01:03:34.777993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.653 [2024-12-07 01:03:34.778024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.653 [2024-12-07 01:03:34.778060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.654 qpair failed and we were unable to recover it. 00:36:18.654 [2024-12-07 01:03:34.787828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.654 [2024-12-07 01:03:34.787916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.654 [2024-12-07 01:03:34.787941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.654 [2024-12-07 01:03:34.787956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.654 [2024-12-07 01:03:34.787969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.654 [2024-12-07 01:03:34.788009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.654 qpair failed and we were unable to recover it. 
00:36:18.654 [2024-12-07 01:03:34.797853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.654 [2024-12-07 01:03:34.797952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.654 [2024-12-07 01:03:34.797978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.654 [2024-12-07 01:03:34.798000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.654 [2024-12-07 01:03:34.798015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.654 [2024-12-07 01:03:34.798046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.654 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.807885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.808022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.808050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.808066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.808079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.808109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.817918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.818013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.818039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.818054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.818067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.818097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 
00:36:18.913 [2024-12-07 01:03:34.827953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.828057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.828084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.828099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.828112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.828142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.837984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.838119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.838147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.838162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.838174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.838204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.848045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.848129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.848160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.848175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.848195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.848228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 
00:36:18.913 [2024-12-07 01:03:34.858053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.858141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.858166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.858180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.858193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.858223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.868175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.868266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.868291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.868305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.868318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.868348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.878102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.878192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.878217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.878235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.878257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.878303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 
00:36:18.913 [2024-12-07 01:03:34.888136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.888222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.888249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.888264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.888282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.888315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.898159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.898258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.898283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.898297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.898310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.898340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 00:36:18.913 [2024-12-07 01:03:34.908199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.908290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.908314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.913 [2024-12-07 01:03:34.908328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.913 [2024-12-07 01:03:34.908341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.913 [2024-12-07 01:03:34.908371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.913 qpair failed and we were unable to recover it. 
00:36:18.913 [2024-12-07 01:03:34.918244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.913 [2024-12-07 01:03:34.918335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.913 [2024-12-07 01:03:34.918359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.918374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.918386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.918416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.928252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.928338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.928363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.928378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.928390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.928420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.938281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.938375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.938399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.938413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.938426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.938455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 
00:36:18.914 [2024-12-07 01:03:34.948427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.948519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.948544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.948558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.948571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.948600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.958351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.958436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.958460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.958475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.958488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.958517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.968401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.968519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.968547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.968562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.968574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.968617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 
00:36:18.914 [2024-12-07 01:03:34.978373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.978450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.978481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.978496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.978509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.978539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.988479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.988575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.988601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.988615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.988628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.988671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:34.998442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:34.998527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:34.998552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:34.998566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:34.998579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:34.998609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 
00:36:18.914 [2024-12-07 01:03:35.008474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:35.008557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:35.008584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:35.008598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:35.008611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:35.008641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:35.018578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:35.018664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:35.018693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:35.018708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:35.018726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:35.018758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:35.028603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:35.028704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:35.028729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:35.028744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:35.028758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:35.028787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 
00:36:18.914 [2024-12-07 01:03:35.038557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:35.038643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:35.038668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.914 [2024-12-07 01:03:35.038683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.914 [2024-12-07 01:03:35.038696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.914 [2024-12-07 01:03:35.038726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.914 qpair failed and we were unable to recover it. 00:36:18.914 [2024-12-07 01:03:35.048616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.914 [2024-12-07 01:03:35.048705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.914 [2024-12-07 01:03:35.048733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.915 [2024-12-07 01:03:35.048748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.915 [2024-12-07 01:03:35.048761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.915 [2024-12-07 01:03:35.048791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.915 qpair failed and we were unable to recover it. 00:36:18.915 [2024-12-07 01:03:35.058646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.915 [2024-12-07 01:03:35.058771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.915 [2024-12-07 01:03:35.058797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.915 [2024-12-07 01:03:35.058812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.915 [2024-12-07 01:03:35.058825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:18.915 [2024-12-07 01:03:35.058855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:18.915 qpair failed and we were unable to recover it. 
00:36:19.190 [2024-12-07 01:03:35.068645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.190 [2024-12-07 01:03:35.068736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.190 [2024-12-07 01:03:35.068761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.190 [2024-12-07 01:03:35.068776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.190 [2024-12-07 01:03:35.068788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.190 [2024-12-07 01:03:35.068818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.190 qpair failed and we were unable to recover it. 00:36:19.190 [2024-12-07 01:03:35.078664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.078769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.078796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.078811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.078823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.078853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 00:36:19.191 [2024-12-07 01:03:35.088687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.088776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.088801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.088816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.088828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.088858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 
00:36:19.191 [2024-12-07 01:03:35.098720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.098855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.098882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.098898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.098911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.098941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 00:36:19.191 [2024-12-07 01:03:35.108826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.108966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.109001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.109018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.109032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.109063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 00:36:19.191 [2024-12-07 01:03:35.118803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.118889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.118915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.118930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.118943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.118973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 
00:36:19.191 [2024-12-07 01:03:35.128831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.128957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.128987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.129012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.129027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.129057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 00:36:19.191 [2024-12-07 01:03:35.138868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.139017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.139059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.139077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.139091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.139128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 00:36:19.191 [2024-12-07 01:03:35.148967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.191 [2024-12-07 01:03:35.149081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.191 [2024-12-07 01:03:35.149108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.191 [2024-12-07 01:03:35.149128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.191 [2024-12-07 01:03:35.149141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.191 [2024-12-07 01:03:35.149172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.191 qpair failed and we were unable to recover it. 
00:36:19.192 [2024-12-07 01:03:35.158902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.158992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.159031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.159048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.159061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.192 [2024-12-07 01:03:35.159092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.168934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.169038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.169063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.169077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.169090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.192 [2024-12-07 01:03:35.169120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.178946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.179046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.179071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.179085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.179098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.192 [2024-12-07 01:03:35.179129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.192 qpair failed and we were unable to recover it. 
00:36:19.192 [2024-12-07 01:03:35.188991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.189096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.189121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.189135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.189148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.192 [2024-12-07 01:03:35.189184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.199017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.199110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.199135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.199149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.199168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2394000b90 00:36:19.192 [2024-12-07 01:03:35.199198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.209053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.209145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.209178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.209195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.209208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f238c000b90 00:36:19.192 [2024-12-07 01:03:35.209240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.192 qpair failed and we were unable to recover it. 
00:36:19.192 [2024-12-07 01:03:35.219186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.219280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.219308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.219322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.219335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f238c000b90 00:36:19.192 [2024-12-07 01:03:35.219366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.229130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.229224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.229257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.229272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.229285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:19.192 [2024-12-07 01:03:35.229316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.239152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.239254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.239282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.239297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.239310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1530730 00:36:19.192 [2024-12-07 01:03:35.239340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.239473] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:19.192 A controller has encountered a failure and is being reset. 
00:36:19.192 [2024-12-07 01:03:35.249155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.249285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.249321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.249338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.249351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2388000b90 00:36:19.192 [2024-12-07 01:03:35.249383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.192 [2024-12-07 01:03:35.259187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.192 [2024-12-07 01:03:35.259312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.192 [2024-12-07 01:03:35.259340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.192 [2024-12-07 01:03:35.259355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.192 [2024-12-07 01:03:35.259367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2388000b90 00:36:19.192 [2024-12-07 01:03:35.259398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.192 qpair failed and we were unable to recover it. 00:36:19.460 Controller properly reset. 00:36:19.460 Initializing NVMe Controllers 00:36:19.460 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:19.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:19.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:19.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:19.460 Initialization complete. Launching workers. 
00:36:19.460 Starting thread on core 1 00:36:19.460 Starting thread on core 2 00:36:19.460 Starting thread on core 3 00:36:19.460 Starting thread on core 0 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:19.460 00:36:19.460 real 0m11.016s 00:36:19.460 user 0m19.108s 00:36:19.460 sys 0m5.207s 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.460 ************************************ 00:36:19.460 END TEST nvmf_target_disconnect_tc2 00:36:19.460 ************************************ 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.460 rmmod nvme_tcp 00:36:19.460 rmmod nvme_fabrics 00:36:19.460 rmmod nvme_keyring 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 415300 ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 415300 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 415300 ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 415300 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415300 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415300' 00:36:19.460 killing process with pid 415300 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 415300 00:36:19.460 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 415300 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.717 01:03:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:22.258 00:36:22.258 real 0m16.103s 00:36:22.258 user 0m46.653s 00:36:22.258 sys 0m7.400s 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:22.258 ************************************ 00:36:22.258 END TEST nvmf_target_disconnect 00:36:22.258 ************************************ 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:22.258 00:36:22.258 real 6m42.958s 00:36:22.258 user 17m14.953s 00:36:22.258 sys 1m25.773s 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.258 01:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.258 ************************************ 00:36:22.258 END TEST nvmf_host 00:36:22.258 ************************************ 00:36:22.258 01:03:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:22.258 01:03:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:22.258 01:03:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:22.258 01:03:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.258 01:03:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.258 01:03:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:22.258 ************************************ 00:36:22.258 START TEST nvmf_target_core_interrupt_mode 00:36:22.258 ************************************ 00:36:22.258 01:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:22.258 * Looking for test storage... 00:36:22.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:22.258 01:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:22.258 01:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:22.258 01:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:22.258 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:22.258 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.258 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.258 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.259 --rc genhtml_branch_coverage=1 00:36:22.259 --rc genhtml_function_coverage=1 00:36:22.259 --rc genhtml_legend=1 00:36:22.259 --rc geninfo_all_blocks=1 00:36:22.259 --rc geninfo_unexecuted_blocks=1 00:36:22.259 00:36:22.259 ' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.259 --rc genhtml_branch_coverage=1 00:36:22.259 --rc genhtml_function_coverage=1 00:36:22.259 --rc genhtml_legend=1 00:36:22.259 --rc geninfo_all_blocks=1 00:36:22.259 --rc geninfo_unexecuted_blocks=1 00:36:22.259 00:36:22.259 ' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.259 --rc genhtml_branch_coverage=1 00:36:22.259 --rc genhtml_function_coverage=1 00:36:22.259 --rc genhtml_legend=1 00:36:22.259 --rc geninfo_all_blocks=1 00:36:22.259 --rc geninfo_unexecuted_blocks=1 00:36:22.259 00:36:22.259 ' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:22.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.259 --rc genhtml_branch_coverage=1 00:36:22.259 --rc genhtml_function_coverage=1 00:36:22.259 --rc genhtml_legend=1 00:36:22.259 --rc geninfo_all_blocks=1 00:36:22.259 --rc geninfo_unexecuted_blocks=1 00:36:22.259 00:36:22.259 ' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.259 ************************************ 00:36:22.259 START TEST nvmf_abort 00:36:22.259 ************************************ 00:36:22.259 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:22.259 * Looking for test storage... 00:36:22.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:22.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.260 --rc genhtml_branch_coverage=1 00:36:22.260 --rc genhtml_function_coverage=1 00:36:22.260 --rc genhtml_legend=1 00:36:22.260 --rc geninfo_all_blocks=1 00:36:22.260 --rc geninfo_unexecuted_blocks=1 00:36:22.260 00:36:22.260 ' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:22.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.260 --rc genhtml_branch_coverage=1 00:36:22.260 --rc genhtml_function_coverage=1 00:36:22.260 --rc genhtml_legend=1 00:36:22.260 --rc geninfo_all_blocks=1 00:36:22.260 --rc geninfo_unexecuted_blocks=1 00:36:22.260 00:36:22.260 ' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:22.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.260 --rc genhtml_branch_coverage=1 00:36:22.260 --rc genhtml_function_coverage=1 00:36:22.260 --rc genhtml_legend=1 00:36:22.260 --rc geninfo_all_blocks=1 00:36:22.260 --rc geninfo_unexecuted_blocks=1 00:36:22.260 00:36:22.260 ' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:22.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.260 --rc genhtml_branch_coverage=1 00:36:22.260 --rc genhtml_function_coverage=1 00:36:22.260 --rc genhtml_legend=1 00:36:22.260 --rc geninfo_all_blocks=1 00:36:22.260 --rc geninfo_unexecuted_blocks=1 00:36:22.260 00:36:22.260 ' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.260 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.261 01:03:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.261 01:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.790 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.790 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.790 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.791 01:03:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:24.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
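The scan above walks the detected pci_devs (two Intel 0x8086:0x159b / E810 functions bound to ice) and resolves each PCI function to its kernel net device through sysfs before deciding which ports the TCP tests can use. A minimal sketch of that lookup, assuming the 0000:0a:00.0 address seen in this run:

  # Net device(s) backing a PCI function (address taken from this log)
  ls /sys/bus/pci/devices/0000:0a:00.0/net/
  # Driver currently bound to that function (expected to be ice here)
  basename "$(readlink -f /sys/bus/pci/devices/0000:0a:00.0/driver)"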
00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:24.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:24.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:24.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:24.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:24.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:36:24.791 00:36:24.791 --- 10.0.0.2 ping statistics --- 00:36:24.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.791 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:36:24.791 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:24.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:24.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:36:24.792 00:36:24.792 --- 10.0.0.1 ping statistics --- 00:36:24.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.792 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=418171 
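The nvmf_tcp_init steps above build a two-sided TCP topology on a single host by hiding the target port in its own network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an ACCEPT rule tagged with an SPDK_NVMF comment opens TCP port 4420, and a ping in each direction confirms reachability before the target is started. A condensed sketch of the same sequence, assuming the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tagged so teardown can later strip exactly this rule again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator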
00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 418171 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 418171 ']' 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.792 01:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.792 [2024-12-07 01:03:40.675770] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:24.792 [2024-12-07 01:03:40.676958] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:36:24.792 [2024-12-07 01:03:40.677037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.792 [2024-12-07 01:03:40.769667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:24.792 [2024-12-07 01:03:40.824879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.792 [2024-12-07 01:03:40.824936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.792 [2024-12-07 01:03:40.824961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.792 [2024-12-07 01:03:40.824981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.792 [2024-12-07 01:03:40.825021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.792 [2024-12-07 01:03:40.826817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.792 [2024-12-07 01:03:40.826888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.792 [2024-12-07 01:03:40.826880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:24.792 [2024-12-07 01:03:40.929455] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:24.792 [2024-12-07 01:03:40.929691] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:24.792 [2024-12-07 01:03:40.929746] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
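nvmfappstart launches nvmf_tgt inside the namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xE and records its pid (418171 here), then waitforlisten blocks until the application answers on /var/tmp/spdk.sock before any configuration RPC is issued. A rough sketch of that readiness wait, polling a basic RPC; the real helper also checks that the pid is still alive, so treat this as an approximation:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5     # target not listening yet
  done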
00:36:24.792 [2024-12-07 01:03:40.930035] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 [2024-12-07 01:03:41.035684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 Malloc0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 Delay0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 [2024-12-07 01:03:41.107881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.050 01:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:25.309 [2024-12-07 01:03:41.260110] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:27.207 Initializing NVMe Controllers 00:36:27.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:27.207 controller IO queue size 128 less than required 00:36:27.207 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:27.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:27.207 Initialization complete. Launching workers. 
00:36:27.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28483 00:36:27.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28544, failed to submit 66 00:36:27.207 success 28483, unsuccessful 61, failed 0 00:36:27.207 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.207 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.207 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.207 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.207 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.208 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.208 rmmod nvme_tcp 00:36:27.465 rmmod nvme_fabrics 00:36:27.465 rmmod nvme_keyring 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 418171 ']' 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 418171 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 418171 ']' 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 418171 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418171 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418171' 00:36:27.465 killing process with pid 418171 00:36:27.465 
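Condensed, abort.sh drove the run above with a handful of RPCs: a 64 MB malloc bdev with a 4096-byte block size is wrapped in a delay bdev (so in-flight I/O stays queued long enough to be abortable), exported as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420, and the bundled abort example then submits I/O at queue depth 128 and aborts it, which is why nearly every command is reported as successfully aborted. A by-hand sketch of the same sequence, with rpc.py and the abort binary abbreviated relative to the SPDK tree used in this run (the script itself goes through its rpc_cmd wrapper):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128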
01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 418171 00:36:27.465 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 418171 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.724 01:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:29.665 00:36:29.665 real 0m7.561s 00:36:29.665 user 0m9.468s 00:36:29.665 sys 0m3.104s 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.665 ************************************ 00:36:29.665 END TEST nvmf_abort 00:36:29.665 ************************************ 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:29.665 ************************************ 00:36:29.665 START TEST nvmf_ns_hotplug_stress 00:36:29.665 ************************************ 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:29.665 * Looking for test storage... 
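Before ns_hotplug_stress repeats the same preamble, the nvmftestfini teardown just above mirrors the setup: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the target process is killed and reaped, and the network pieces are unwound by dropping the tagged firewall rules and removing the namespace, which returns cvl_0_0 to the root namespace. A sketch of that network cleanup with the names from this run, approximating _remove_spdk_ns by a plain namespace delete:

  # Keep every iptables rule except the ones tagged SPDK_NVMF during setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Roughly what _remove_spdk_ns does for this topology
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1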
00:36:29.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:29.665 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:29.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.924 --rc genhtml_branch_coverage=1 00:36:29.924 --rc genhtml_function_coverage=1 00:36:29.924 --rc genhtml_legend=1 00:36:29.924 --rc geninfo_all_blocks=1 00:36:29.924 --rc geninfo_unexecuted_blocks=1 00:36:29.924 00:36:29.924 ' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:29.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.924 --rc genhtml_branch_coverage=1 00:36:29.924 --rc genhtml_function_coverage=1 00:36:29.924 --rc genhtml_legend=1 00:36:29.924 --rc geninfo_all_blocks=1 00:36:29.924 --rc geninfo_unexecuted_blocks=1 00:36:29.924 00:36:29.924 ' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:29.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.924 --rc genhtml_branch_coverage=1 00:36:29.924 --rc genhtml_function_coverage=1 00:36:29.924 --rc genhtml_legend=1 00:36:29.924 --rc geninfo_all_blocks=1 00:36:29.924 --rc geninfo_unexecuted_blocks=1 00:36:29.924 00:36:29.924 ' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:29.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.924 --rc genhtml_branch_coverage=1 00:36:29.924 --rc genhtml_function_coverage=1 
00:36:29.924 --rc genhtml_legend=1 00:36:29.924 --rc geninfo_all_blocks=1 00:36:29.924 --rc geninfo_unexecuted_blocks=1 00:36:29.924 00:36:29.924 ' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.924 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:29.925 01:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:32.456 01:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:32.456 01:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:32.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:32.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:32.456 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.457 
01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:32.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:32.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.457 01:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:32.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:36:32.457 00:36:32.457 --- 10.0.0.2 ping statistics --- 00:36:32.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.457 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:32.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:36:32.457 00:36:32.457 --- 10.0.0.1 ping statistics --- 00:36:32.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.457 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=420398 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 420398 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 420398 ']' 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
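The nvmftestinit sequence traced above builds the TCP test bed out of the two ice-driven E810 ports (cvl_0_0 and cvl_0_1) roughly as follows. This is a condensed sketch assembled only from the commands echoed in the trace; the interface names, the 10.0.0.0/24 addressing and the iptables comment tag added by the ipts wrapper are specific to this CI host.

    # clear any stale addressing, move one port into a private namespace, address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept TCP port 4420 (NVMe/TCP) arriving on cvl_0_1, then sanity-check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1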
00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:32.457 [2024-12-07 01:03:48.250622] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:32.457 [2024-12-07 01:03:48.251725] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:36:32.457 [2024-12-07 01:03:48.251776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:32.457 [2024-12-07 01:03:48.324224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:32.457 [2024-12-07 01:03:48.371360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.457 [2024-12-07 01:03:48.371413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.457 [2024-12-07 01:03:48.371436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.457 [2024-12-07 01:03:48.371447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.457 [2024-12-07 01:03:48.371456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:32.457 [2024-12-07 01:03:48.373105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:32.457 [2024-12-07 01:03:48.373159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.457 [2024-12-07 01:03:48.373155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:32.457 [2024-12-07 01:03:48.460376] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:32.457 [2024-12-07 01:03:48.460593] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:32.457 [2024-12-07 01:03:48.460608] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:32.457 [2024-12-07 01:03:48.460840] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
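nvmfappstart then launches the target inside that namespace with interrupt mode enabled, which is why the reactors on cores 1-3 (core mask 0xE) and every spdk_thread come up in intr mode in the notices above. A sketch of the invocation as echoed in the trace; the backgrounding and PID capture shown here are an assumption about how nvmf/common.sh wires this up, the trace itself only reports the resulting nvmfpid=420398.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!                 # assumed capture; this run reports nvmfpid=420398
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs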
00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:32.457 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:32.715 [2024-12-07 01:03:48.765807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.715 01:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:32.973 01:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:33.231 [2024-12-07 01:03:49.318160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.231 01:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:33.488 01:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:33.746 Malloc0 00:36:34.003 01:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:34.261 Delay0 00:36:34.261 01:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.518 01:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:34.775 NULL1 00:36:34.775 01:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
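Once the target answers on /var/tmp/spdk.sock, ns_hotplug_stress.sh provisions the transport, the subsystem and its backing bdevs over rpc.py. The following is condensed from the @27-@36 lines traced above, with rpc_py standing in for the full scripts/rpc.py path used in the log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py bdev_null_create NULL1 1000 512      # size 1000, resized repeatedly below
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 and NULL1 become the two namespaces of cnode1; the nvmf_subsystem_remove_ns ... 1 calls below are consistent with Delay0 holding NSID 1 and being the namespace that gets hot-removed and re-added while NULL1 is grown.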
00:36:35.033 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=420808 00:36:35.033 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:35.033 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:35.033 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.290 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.548 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:35.548 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:35.806 true 00:36:35.806 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:35.806 01:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.064 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.321 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:36.321 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:36.579 true 00:36:36.579 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:36.579 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.837 01:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.094 01:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:37.094 01:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:37.352 true 00:36:37.352 01:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:37.352 01:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.286 Read completed with error (sct=0, sc=11) 00:36:38.543 01:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.800 01:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:38.800 01:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:39.058 true 00:36:39.058 01:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:39.058 01:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.315 01:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.572 01:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:39.572 01:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:39.829 true 00:36:39.829 01:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:39.829 01:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.086 01:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.344 01:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:40.344 01:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:40.601 true 00:36:40.601 01:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:40.601 01:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.534 01:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.792 01:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:41.792 01:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:42.049 true 00:36:42.049 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:42.049 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.306 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.564 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:42.564 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:42.822 true 00:36:42.822 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:42.822 01:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.755 01:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:43.755 01:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:43.755 01:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:44.320 true 00:36:44.320 01:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:44.320 01:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.320 01:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.577 01:04:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:44.577 01:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:45.141 true 00:36:45.141 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:45.141 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.141 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.399 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:45.399 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:45.655 true 00:36:45.656 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:45.656 01:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.588 01:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.103 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:47.103 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:47.361 true 00:36:47.361 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:47.361 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.618 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.877 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:47.877 01:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:48.134 true 00:36:48.134 01:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:48.134 01:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:49.068 01:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.325 01:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:49.325 01:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:49.582 true 00:36:49.582 01:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:49.582 01:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.859 01:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.116 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:50.116 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:50.375 true 00:36:50.375 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:50.375 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.632 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.890 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:50.890 01:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:51.148 true 00:36:51.148 01:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:51.148 01:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.079 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.335 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:52.335 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:52.592 true 00:36:52.592 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:52.592 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.849 01:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.106 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:53.106 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:53.363 true 00:36:53.363 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:53.363 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.621 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.880 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:53.880 01:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:54.138 true 00:36:54.138 01:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:54.138 01:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.077 01:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.335 01:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:55.335 01:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:55.901 true 00:36:55.901 01:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:55.901 01:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.901 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.159 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:56.159 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:56.418 true 00:36:56.418 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:56.418 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.988 01:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.988 01:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:56.988 01:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:57.246 true 00:36:57.246 01:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:57.246 01:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.620 01:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.620 01:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:58.620 01:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:58.878 true 00:36:58.878 01:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:58.878 01:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
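The repeating pattern above, null_size stepping through 1001, 1002, ... while PERF_PID 420808 stays alive, is the heart of the stress test: spdk_nvme_perf hammers the subsystem from lcore 0 while NSID 1 is hot-removed and re-added and the NULL1 bdev is resized under it. The trace is consistent with a loop of roughly the following shape; this is a reconstruction from the echoed commands, not the literal script body.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                    # 420808 in this run
    null_size=1000
    while kill -0 "$PERF_PID"; do                  # stop once perf (-t 30) has exited
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # grow the NULL1 namespace
    done
    wait "$PERF_PID"

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines interleaved above are perf reporting reads that land on the namespace while it is detached, which is the expected symptom the test is exercising.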
00:36:59.171 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.428 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:59.428 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:59.686 true 00:36:59.686 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:36:59.686 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.944 01:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.202 01:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:00.202 01:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:00.461 true 00:37:00.461 01:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:00.461 01:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.397 01:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.656 01:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:01.656 01:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:01.914 true 00:37:01.914 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:01.914 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.173 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.739 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1027 00:37:02.739 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:02.739 true 00:37:02.739 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:02.739 01:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.997 01:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.256 01:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:03.256 01:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:03.514 true 00:37:03.773 01:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:03.773 01:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.707 01:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.966 01:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:04.966 01:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:05.224 true 00:37:05.224 01:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:05.224 01:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.482 01:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.482 Initializing NVMe Controllers 00:37:05.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:05.482 Controller IO queue size 128, less than required. 00:37:05.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:05.482 Controller IO queue size 128, less than required. 00:37:05.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:37:05.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:05.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:05.482 Initialization complete. Launching workers. 00:37:05.482 ======================================================== 00:37:05.482 Latency(us) 00:37:05.482 Device Information : IOPS MiB/s Average min max 00:37:05.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 445.41 0.22 107049.81 3237.67 1013249.51 00:37:05.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7815.52 3.82 16328.86 2617.92 396920.69 00:37:05.482 ======================================================== 00:37:05.482 Total : 8260.93 4.03 21220.31 2617.92 1013249.51 00:37:05.482 00:37:05.740 01:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:05.740 01:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:05.998 true 00:37:05.998 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 420808 00:37:05.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (420808) - No such process 00:37:05.998 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 420808 00:37:05.998 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.256 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:06.514 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:06.514 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:06.514 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:06.514 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.514 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:06.773 null0 00:37:06.773 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.773 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.773 01:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:07.031 null1 00:37:07.031 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.031 
01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.031 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:07.290 null2 00:37:07.290 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.290 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.290 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:07.548 null3 00:37:07.548 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.548 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.548 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:07.808 null4 00:37:07.808 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:07.808 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:07.808 01:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:08.066 null5 00:37:08.066 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:08.066 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:08.066 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:08.324 null6 00:37:08.324 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:08.324 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:08.324 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:08.583 null7 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
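
The xtrace above (target/ns_hotplug_stress.sh@14-18) shows each worker repeatedly attaching one null bdev to cnode1 as a fixed namespace ID and detaching it again, ten times. A minimal sketch of that add_remove helper, reconstructed from the trace alone (the exact function body and variable handling are inferred, not quoted from the script source):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace 10 times,
    # matching the sh@16 "(( i < 10 ))" guard and the sh@17/@18 rpc.py calls.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace $nsid of cnode1 ...
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then detach that namespace again
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
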
00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.583 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
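
Around this point the sh@62-64 trace shows the eight workers being launched in the background (add_remove 1 null0 through add_remove 8 null7) with their PIDs collected, and the sh@66 line below waits on all of them. A hedged sketch of that driver loop, inferred from the trace (variable names taken from the xtrace, not the verbatim script):

    # One background add_remove worker per null bdev; NSID i+1 is backed by null$i.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)            # collect worker PIDs (e.g. 424813 424814 ... in sh@66)
    done
    wait "${pids[@]}"         # block until every hotplug worker has finished
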
00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 424813 424814 424816 424818 424820 424822 424824 424826 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.584 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.844 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.844 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.101 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.101 01:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.101 01:04:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.101 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.101 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.101 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.358 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.359 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.616 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.616 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.616 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.617 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.617 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.617 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.617 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.617 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
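
For reference, these are the three rpc.py calls the stress loop keeps exercising, with the argument meanings spelled out; reading "100 4096" as size in MB and block size in bytes is an interpretation of the positional arguments, not something the log itself states:

    # Create a 100 MB null bdev with a 4096-byte block size
    scripts/rpc.py bdev_null_create null0 100 4096
    # Attach it to the subsystem as namespace ID 1
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    # Detach namespace ID 1 again
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

Because eight such workers run concurrently against the same subsystem, the interleaving of add and remove lines in the log is nondeterministic from run to run.
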
00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.874 01:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:10.131 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.132 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.132 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.389 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.955 01:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.955 01:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.214 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:11.471 
01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:11.471 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:11.728 01:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:11.985 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.242 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:12.243 
01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:12.243 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:12.502 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.094 01:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:13.095 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:13.095 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:13.095 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:13.095 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:13.377 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.377 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:13.377 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:13.377 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.658 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:13.658 01:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:13.659 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:13.917 
01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:13.917 01:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.174 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:14.431 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.689 rmmod nvme_tcp 00:37:14.689 rmmod nvme_fabrics 00:37:14.689 rmmod nvme_keyring 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 420398 ']' 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 420398 00:37:14.689 01:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 420398 ']' 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 420398 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.689 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420398 00:37:14.947 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:14.947 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:14.947 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420398' 00:37:14.947 killing process with pid 420398 00:37:14.947 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 420398 00:37:14.947 01:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 420398 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.947 01:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.483 00:37:17.483 real 0m47.353s 00:37:17.483 user 3m19.002s 00:37:17.483 sys 0m22.047s 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.483 01:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:17.483 ************************************ 00:37:17.483 END TEST nvmf_ns_hotplug_stress 00:37:17.483 ************************************ 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:17.483 ************************************ 00:37:17.483 START TEST nvmf_delete_subsystem 00:37:17.483 ************************************ 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:17.483 * Looking for test storage... 00:37:17.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:17.483 01:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.483 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.484 --rc genhtml_branch_coverage=1 00:37:17.484 --rc genhtml_function_coverage=1 00:37:17.484 --rc genhtml_legend=1 00:37:17.484 --rc geninfo_all_blocks=1 00:37:17.484 --rc geninfo_unexecuted_blocks=1 00:37:17.484 00:37:17.484 ' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.484 --rc genhtml_branch_coverage=1 00:37:17.484 --rc genhtml_function_coverage=1 00:37:17.484 --rc genhtml_legend=1 00:37:17.484 --rc geninfo_all_blocks=1 00:37:17.484 --rc geninfo_unexecuted_blocks=1 00:37:17.484 00:37:17.484 ' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.484 --rc genhtml_branch_coverage=1 00:37:17.484 --rc genhtml_function_coverage=1 00:37:17.484 --rc genhtml_legend=1 00:37:17.484 --rc geninfo_all_blocks=1 00:37:17.484 --rc 
geninfo_unexecuted_blocks=1 00:37:17.484 00:37:17.484 ' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:17.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.484 --rc genhtml_branch_coverage=1 00:37:17.484 --rc genhtml_function_coverage=1 00:37:17.484 --rc genhtml_legend=1 00:37:17.484 --rc geninfo_all_blocks=1 00:37:17.484 --rc geninfo_unexecuted_blocks=1 00:37:17.484 00:37:17.484 ' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:17.484 01:04:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:17.484 01:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:19.387 01:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:19.387 01:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:19.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.387 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:19.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.388 01:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:19.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:19.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:19.388 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:19.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:19.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:37:19.645 00:37:19.645 --- 10.0.0.2 ping statistics --- 00:37:19.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.645 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:19.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:19.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:37:19.645 00:37:19.645 --- 10.0.0.1 ping statistics --- 00:37:19.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.645 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=427583 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 427583 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 427583 ']' 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
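For readers reconstructing the nvmftestinit trace above: the setup amounts to moving the target-side NIC into a private network namespace and addressing both ends of the link before any NVMe/TCP traffic flows. A minimal sketch of that topology in plain shell, using the namespace and interface names that appear in this log (cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk are specific to this test bed; substitute your own devices):

# target-side interface lives in its own namespace; initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP (port 4420) in on the initiator-facing interface, as the trace does with ipts
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check connectivity in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1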
00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:19.645 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.645 [2024-12-07 01:04:35.664823] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:19.645 [2024-12-07 01:04:35.665873] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:19.645 [2024-12-07 01:04:35.665925] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.645 [2024-12-07 01:04:35.741505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:19.645 [2024-12-07 01:04:35.789090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.646 [2024-12-07 01:04:35.789146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.646 [2024-12-07 01:04:35.789159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.646 [2024-12-07 01:04:35.789171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.646 [2024-12-07 01:04:35.789180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:19.646 [2024-12-07 01:04:35.790592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.646 [2024-12-07 01:04:35.790597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.903 [2024-12-07 01:04:35.875579] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:19.903 [2024-12-07 01:04:35.875624] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:19.903 [2024-12-07 01:04:35.875843] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
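The target itself is then launched inside that namespace in interrupt mode; -m 0x3 gives it the two cores whose reactors are reported above, and the harness blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch, assuming the workspace layout used in this job; the real waitforlisten helper in autotest_common.sh is considerably more careful than this polling loop:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# wait until the app has created its RPC Unix socket and starts answering RPCs
until [ -S /var/tmp/spdk.sock ] && "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"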
00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.903 [2024-12-07 01:04:35.939356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.903 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.904 [2024-12-07 01:04:35.955513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.904 NULL1 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.904 01:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.904 Delay0 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=427718 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:19.904 01:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:19.904 [2024-12-07 01:04:36.031760] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
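With the target up, delete_subsystem.sh assembles the device under test entirely over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and that delay bdev attached as the namespace, after which spdk_nvme_perf is pointed at the listener. A condensed sketch of the same sequence, using only the RPC names and arguments visible in the trace (rpc.py is assumed to talk to the same /var/tmp/spdk.sock):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512-byte blocks
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s of added latency on reads and writes
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Drive random I/O at queue depth 128 on cores 2-3; the delay bdev keeps those
  # commands in flight long enough for the delete in the next step to hit them.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!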
00:37:22.431 01:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.431 01:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.431 01:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 [2024-12-07 01:04:38.206645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff25800d4b0 is same with the state(6) to be set 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 
00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 [2024-12-07 01:04:38.207379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff258000c40 is same with the state(6) to be set 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 
Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 starting I/O failed: -6 00:37:22.431 Write completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.431 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 starting I/O failed: -6 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 [2024-12-07 01:04:38.207825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3150 is same with the state(6) to be set 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Read completed with 
error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Read completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:22.432 Write completed with error (sct=0, sc=8) 00:37:23.366 [2024-12-07 01:04:39.170086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1190 is same with the state(6) to be set 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 [2024-12-07 01:04:39.206113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3330 is same with the state(6) to be set 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 [2024-12-07 01:04:39.206342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc2f70 is same with the state(6) to be set 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 [2024-12-07 01:04:39.206727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x7ff25800d7e0 is same with the state(6) to be set 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Read completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 Write completed with error (sct=0, sc=8) 00:37:23.366 [2024-12-07 01:04:39.206895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff25800d020 is same with the state(6) to be set 00:37:23.366 Initializing NVMe Controllers 00:37:23.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:23.366 Controller IO queue size 128, less than required. 00:37:23.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:23.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:23.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:23.366 Initialization complete. Launching workers. 
00:37:23.367 ======================================================== 00:37:23.367 Latency(us) 00:37:23.367 Device Information : IOPS MiB/s Average min max 00:37:23.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 147.88 0.07 952649.99 361.94 1011544.27 00:37:23.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.75 0.08 905897.15 713.88 1011956.91 00:37:23.367 ======================================================== 00:37:23.367 Total : 312.62 0.15 928011.99 361.94 1011956.91 00:37:23.367 00:37:23.367 [2024-12-07 01:04:39.207718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1190 (9): Bad file descriptor 00:37:23.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:23.367 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.367 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:23.367 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 427718 00:37:23.367 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 427718 00:37:23.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (427718) - No such process 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 427718 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 427718 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 427718 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
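Deleting the subsystem while perf still has a full queue of delayed I/O is the point of the test: the qpairs are torn down, the in-flight commands come back as the blocks of "completed with error (sct=0, sc=8)" lines above, and spdk_nvme_perf exits reporting errors. The script then only has to confirm the perf process is gone, which is the kill -0 / sleep 0.5 loop in the trace. A sketch of that delete-and-wait step, modelled on those lines:

  # Tear the subsystem down underneath the running perf job ...
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # ... then poll until perf has exited, giving up after ~15 s (30 x 0.5 s).
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && { echo "perf did not exit after delete" >&2; exit 1; }
      sleep 0.5
  done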
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:23.625 [2024-12-07 01:04:39.727517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=428123 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:23.625 01:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:23.884 [2024-12-07 01:04:39.796549] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
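The second round recreates the same subsystem and Delay0 namespace but lets a shorter 3-second perf run (-t 3) finish with the subsystem intact; the repeated kill -0 / sleep 0.5 checks that follow are just the script waiting for that run to end on its own. As a rough cross-check of the numbers it produces further down: with queue depth 128 per core and roughly 1 s of configured delay on every command, each core can complete only about 128 / 1 s, i.e. around 128 IOPS, which matches the 128.00 IOPS rows and the ~1,003,000 us average latencies in the second summary.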
00:37:24.142 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:24.142 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:24.142 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:24.707 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:24.707 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:24.707 01:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:25.273 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:25.274 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:25.274 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:25.839 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:25.839 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:25.839 01:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:26.404 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:26.404 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:26.404 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:26.661 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:26.661 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:26.661 01:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:26.919 Initializing NVMe Controllers 00:37:26.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:26.919 Controller IO queue size 128, less than required. 00:37:26.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:26.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:26.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:26.919 Initialization complete. Launching workers. 
00:37:26.919 ======================================================== 00:37:26.919 Latency(us) 00:37:26.919 Device Information : IOPS MiB/s Average min max 00:37:26.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003682.31 1000224.82 1042595.46 00:37:26.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006408.53 1000417.85 1042714.17 00:37:26.919 ======================================================== 00:37:26.919 Total : 256.00 0.12 1005045.42 1000224.82 1042714.17 00:37:26.919 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 428123 00:37:27.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (428123) - No such process 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 428123 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.177 rmmod nvme_tcp 00:37:27.177 rmmod nvme_fabrics 00:37:27.177 rmmod nvme_keyring 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 427583 ']' 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 427583 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 427583 ']' 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 427583 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.177 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427583 00:37:27.436 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.436 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.436 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427583' 00:37:27.436 killing process with pid 427583 00:37:27.436 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 427583 00:37:27.436 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 427583 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.437 01:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.969 00:37:29.969 real 0m12.448s 00:37:29.969 user 0m24.713s 00:37:29.969 sys 0m3.745s 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.969 ************************************ 00:37:29.969 END TEST nvmf_delete_subsystem 00:37:29.969 ************************************ 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
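The teardown recorded above (nvmftestfini) is the mirror image of the setup: unload the kernel NVMe/TCP modules pulled in for the test, kill the nvmf_tgt started at the beginning, drop any SPDK_NVMF iptables rules, and flush the initiator-side address. A hedged sketch of the same cleanup, using the module, interface and rule names that appear in the trace:

  # Unload the host-side NVMe/TCP stack (modprobe -r also drops the
  # nvme_fabrics / nvme_keyring dependencies, as the rmmod lines above show).
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics

  # Stop the target application started earlier.
  kill "$nvmfpid" && wait "$nvmfpid"

  # Remove any SPDK_NVMF iptables rules added for the run, then flush the
  # initiator-side test address.
  sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
  sudo ip -4 addr flush cvl_0_1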
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.969 ************************************ 00:37:29.969 START TEST nvmf_host_management 00:37:29.969 ************************************ 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:29.969 * Looking for test storage... 00:37:29.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.969 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:29.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.970 --rc genhtml_branch_coverage=1 00:37:29.970 --rc genhtml_function_coverage=1 00:37:29.970 --rc genhtml_legend=1 00:37:29.970 --rc geninfo_all_blocks=1 00:37:29.970 --rc geninfo_unexecuted_blocks=1 00:37:29.970 00:37:29.970 ' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:29.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.970 --rc genhtml_branch_coverage=1 00:37:29.970 --rc genhtml_function_coverage=1 00:37:29.970 --rc genhtml_legend=1 00:37:29.970 --rc geninfo_all_blocks=1 00:37:29.970 --rc geninfo_unexecuted_blocks=1 00:37:29.970 00:37:29.970 ' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:29.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.970 --rc genhtml_branch_coverage=1 00:37:29.970 --rc genhtml_function_coverage=1 00:37:29.970 --rc genhtml_legend=1 00:37:29.970 --rc geninfo_all_blocks=1 00:37:29.970 --rc geninfo_unexecuted_blocks=1 00:37:29.970 00:37:29.970 ' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:29.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.970 --rc genhtml_branch_coverage=1 00:37:29.970 --rc genhtml_function_coverage=1 00:37:29.970 --rc genhtml_legend=1 
00:37:29.970 --rc geninfo_all_blocks=1 00:37:29.970 --rc geninfo_unexecuted_blocks=1 00:37:29.970 00:37:29.970 ' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.970 01:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.970 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.971 01:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.871 01:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:31.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:31.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
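The array setup traced above is how nvmf/common.sh sorts the host's NICs into driver families before picking test ports: every candidate PCI function is bucketed by its vendor:device ID (0x1592 and 0x159b are Intel E810 parts, 0x37d2 is X722, the remaining IDs are Mellanox parts), and for this TCP run pci_devs ends up holding the two E810 ports reported as 0000:0a:00.0 and 0000:0a:00.1. A rough standalone equivalent of that classification step, using plain lspci instead of the harness's cached PCI scan, would look like:

    declare -A dev_id; e810=(); x722=(); mlx=()
    while read -r bdf id; do
        dev_id[$bdf]=$id
        case "$id" in
            8086:1592 | 8086:159b) e810+=("$bdf") ;;   # Intel E810 (ice driver)
            8086:37d2)             x722+=("$bdf") ;;   # Intel X722
            15b3:*)                mlx+=("$bdf") ;;    # Mellanox parts
        esac
    done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}')   # class 02xx = network
    pci_devs=("${e810[@]}")        # this TCP run keeps the E810 ports, as in the trace
    for pci in "${pci_devs[@]}"; do echo "Found $pci (${dev_id[$pci]})"; done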
00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:31.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:31.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.871 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.872 01:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:32.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:32.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:37:32.130 00:37:32.130 --- 10.0.0.2 ping statistics --- 00:37:32.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.130 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:32.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:32.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:37:32.130 00:37:32.130 --- 10.0.0.1 ping statistics --- 00:37:32.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.130 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=430457 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 430457 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 430457 ']' 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.130 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:32.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.131 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.131 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.131 [2024-12-07 01:04:48.129188] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:32.131 [2024-12-07 01:04:48.130231] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:32.131 [2024-12-07 01:04:48.130300] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.131 [2024-12-07 01:04:48.203859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:32.131 [2024-12-07 01:04:48.253119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.131 [2024-12-07 01:04:48.253176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.131 [2024-12-07 01:04:48.253190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.131 [2024-12-07 01:04:48.253202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.131 [2024-12-07 01:04:48.253212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:32.131 [2024-12-07 01:04:48.255082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:32.131 [2024-12-07 01:04:48.255135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:32.131 [2024-12-07 01:04:48.255188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:32.131 [2024-12-07 01:04:48.255192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.389 [2024-12-07 01:04:48.344842] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:32.389 [2024-12-07 01:04:48.345096] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:32.389 [2024-12-07 01:04:48.345381] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:32.389 [2024-12-07 01:04:48.345955] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:32.389 [2024-12-07 01:04:48.346219] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
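The nvmf_tcp_init sequence traced above turns the two discovered ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, its sibling port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings confirm the ports really reach each other. The target is then launched inside that namespace with -m 0x1E (cores 1-4, matching the four reactor notices) and --interrupt-mode, which is what produces the "to intr mode" thread messages just above. A condensed replay of those commands (the cvl_0_* interface names are simply what this host enumerated):

    NS=cvl_0_0_ns_spdk                                   # namespace name from the trace
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up    # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # reachability check
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &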
00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.389 [2024-12-07 01:04:48.395881] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:32.389 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.390 Malloc0 00:37:32.390 [2024-12-07 01:04:48.464038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=430619 00:37:32.390 01:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 430619 /var/tmp/bdevperf.sock 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 430619 ']' 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:32.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:32.390 { 00:37:32.390 "params": { 00:37:32.390 "name": "Nvme$subsystem", 00:37:32.390 "trtype": "$TEST_TRANSPORT", 00:37:32.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:32.390 "adrfam": "ipv4", 00:37:32.390 "trsvcid": "$NVMF_PORT", 00:37:32.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:32.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:32.390 "hdgst": ${hdgst:-false}, 00:37:32.390 "ddgst": ${ddgst:-false} 00:37:32.390 }, 00:37:32.390 "method": "bdev_nvme_attach_controller" 00:37:32.390 } 00:37:32.390 EOF 00:37:32.390 )") 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
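The gen_nvmf_target_json call traced above is what feeds bdevperf its configuration: for each requested subsystem index it expands one bdev_nvme_attach_controller stanza from the heredoc template, validates the result with jq, and bdevperf reads the finished document over an anonymous descriptor (--json /dev/fd/63) rather than from a file on disk; the fully substituted stanza for subsystem 0 is printed just below in the trace. A loose, self-contained rendering of the same idea, with jq doing the templating instead of the shell heredoc:

    gen_nvmf_target_json() {          # sketch: one attach stanza per subsystem index
        local n
        for n in "${@:-0}"; do
            jq -n --arg n "$n" '{
                method: "bdev_nvme_attach_controller",
                params: {
                    name:    ("Nvme" + $n),
                    trtype:  "tcp",  traddr: "10.0.0.2",
                    adrfam:  "ipv4", trsvcid: "4420",
                    subnqn:  ("nqn.2016-06.io.spdk:cnode" + $n),
                    hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
                    hdgst: false, ddgst: false
                }
            }'
        done
    }
    # the real helper comma-joins the stanzas (the IFS=, / printf steps in the trace)
    # into the config bdevperf consumes as:
    #   bdevperf --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10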
00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:32.390 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:32.390 "params": { 00:37:32.390 "name": "Nvme0", 00:37:32.390 "trtype": "tcp", 00:37:32.390 "traddr": "10.0.0.2", 00:37:32.390 "adrfam": "ipv4", 00:37:32.390 "trsvcid": "4420", 00:37:32.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.390 "hdgst": false, 00:37:32.390 "ddgst": false 00:37:32.390 }, 00:37:32.390 "method": "bdev_nvme_attach_controller" 00:37:32.390 }' 00:37:32.648 [2024-12-07 01:04:48.540106] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:32.648 [2024-12-07 01:04:48.540192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430619 ] 00:37:32.648 [2024-12-07 01:04:48.611487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.648 [2024-12-07 01:04:48.658885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.906 Running I/O for 10 seconds... 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:32.906 01:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=561 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 561 -ge 100 ']' 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:33.166 [2024-12-07 01:04:49.255892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f664f0 is same with the state(6) to be set 00:37:33.166 [2024-12-07 01:04:49.255956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f664f0 is same with the state(6) to be set 00:37:33.166 [2024-12-07 01:04:49.255971] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f664f0 is same with the state(6) to be set 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.166 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:33.166 [2024-12-07 01:04:49.261711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.166 [2024-12-07 01:04:49.261753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.166 [2024-12-07 01:04:49.261771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.167 [2024-12-07 01:04:49.261786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.261799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.167 [2024-12-07 01:04:49.261813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.261827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.167 [2024-12-07 01:04:49.261840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.261853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb980 is same with the state(6) to be set 00:37:33.167 [2024-12-07 01:04:49.267098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 
01:04:49.267227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267525] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.267966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.267993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.268018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.268050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.167 [2024-12-07 01:04:49.268084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.268115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.268145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.167 [2024-12-07 01:04:49.268174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.167 [2024-12-07 01:04:49.268190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 01:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:33.168 [2024-12-07 01:04:49.268223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
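The wall of "ABORTED - SQ DELETION" completions running through this part of the trace is the point of the test rather than a malfunction: while bdevperf was driving 64-deep verify writes, host_management.sh@84 removed the initiator's host NQN from cnode0, which severs that host's queue pairs and fails every in-flight command with this status, and @85 re-added it so the automatic controller reset logged after the dump can reconnect. Issued by hand the same fault injection would be the following pair of RPCs (rpc.py standing in for the harness's rpc_cmd wrapper):

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0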
00:37:33.168 [2024-12-07 01:04:49.268432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 
[2024-12-07 01:04:49.268755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.268965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.268986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.269011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.269027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.269043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.269057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 
01:04:49.269072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.168 [2024-12-07 01:04:49.269086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.168 [2024-12-07 01:04:49.270267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:33.168 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:33.168 00:37:33.168 Latency(us) 00:37:33.168 [2024-12-07T00:04:49.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.168 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:33.168 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:33.168 Verification LBA range: start 0x0 length 0x400 00:37:33.168 Nvme0n1 : 0.40 1615.30 100.96 161.53 0.00 34967.35 2439.40 33787.45 00:37:33.168 [2024-12-07T00:04:49.319Z] =================================================================================================================== 00:37:33.168 [2024-12-07T00:04:49.319Z] Total : 1615.30 100.96 161.53 0.00 34967.35 2439.40 33787.45 00:37:33.168 [2024-12-07 01:04:49.272177] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:33.168 [2024-12-07 01:04:49.272208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fb980 (9): Bad file descriptor 00:37:33.427 [2024-12-07 01:04:49.364162] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 430619 00:37:34.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (430619) - No such process 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.361 { 00:37:34.361 "params": { 00:37:34.361 "name": "Nvme$subsystem", 00:37:34.361 "trtype": "$TEST_TRANSPORT", 00:37:34.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.361 "adrfam": "ipv4", 00:37:34.361 "trsvcid": "$NVMF_PORT", 00:37:34.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.361 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:34.361 "hdgst": ${hdgst:-false}, 00:37:34.361 "ddgst": ${ddgst:-false} 00:37:34.361 }, 00:37:34.361 "method": "bdev_nvme_attach_controller" 00:37:34.361 } 00:37:34.361 EOF 00:37:34.361 )") 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:34.361 01:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.361 "params": { 00:37:34.361 "name": "Nvme0", 00:37:34.361 "trtype": "tcp", 00:37:34.361 "traddr": "10.0.0.2", 00:37:34.361 "adrfam": "ipv4", 00:37:34.361 "trsvcid": "4420", 00:37:34.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.361 "hdgst": false, 00:37:34.361 "ddgst": false 00:37:34.361 }, 00:37:34.361 "method": "bdev_nvme_attach_controller" 00:37:34.361 }' 00:37:34.361 [2024-12-07 01:04:50.321568] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:34.361 [2024-12-07 01:04:50.321644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430773 ] 00:37:34.361 [2024-12-07 01:04:50.393283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.361 [2024-12-07 01:04:50.442265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.619 Running I/O for 1 seconds... 
00:37:35.554 1536.00 IOPS, 96.00 MiB/s 00:37:35.554 Latency(us) 00:37:35.554 [2024-12-07T00:04:51.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.554 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:35.554 Verification LBA range: start 0x0 length 0x400 00:37:35.554 Nvme0n1 : 1.02 1563.62 97.73 0.00 0.00 40280.09 5752.60 35923.44 00:37:35.554 [2024-12-07T00:04:51.705Z] =================================================================================================================== 00:37:35.554 [2024-12-07T00:04:51.705Z] Total : 1563.62 97.73 0.00 0.00 40280.09 5752.60 35923.44 00:37:35.812 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.813 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.813 rmmod nvme_tcp 00:37:35.813 rmmod nvme_fabrics 00:37:35.813 rmmod nvme_keyring 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 430457 ']' 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 430457 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 430457 ']' 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 430457 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:36.071 01:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.071 01:04:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430457 00:37:36.071 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:36.071 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:36.071 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430457' 00:37:36.071 killing process with pid 430457 00:37:36.071 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 430457 00:37:36.071 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 430457 00:37:36.071 [2024-12-07 01:04:52.211603] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.331 01:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:38.229 00:37:38.229 real 0m8.652s 00:37:38.229 user 0m16.801s 00:37:38.229 sys 0m3.757s 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:38.229 ************************************ 00:37:38.229 END TEST nvmf_host_management 00:37:38.229 ************************************ 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:38.229 ************************************ 00:37:38.229 START TEST nvmf_lvol 00:37:38.229 ************************************ 00:37:38.229 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:38.488 * Looking for test storage... 00:37:38.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:38.488 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.489 --rc genhtml_branch_coverage=1 00:37:38.489 --rc genhtml_function_coverage=1 00:37:38.489 --rc genhtml_legend=1 00:37:38.489 --rc geninfo_all_blocks=1 00:37:38.489 --rc geninfo_unexecuted_blocks=1 00:37:38.489 00:37:38.489 ' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.489 --rc genhtml_branch_coverage=1 00:37:38.489 --rc genhtml_function_coverage=1 00:37:38.489 --rc genhtml_legend=1 00:37:38.489 --rc geninfo_all_blocks=1 00:37:38.489 --rc geninfo_unexecuted_blocks=1 00:37:38.489 00:37:38.489 ' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.489 --rc genhtml_branch_coverage=1 00:37:38.489 --rc genhtml_function_coverage=1 00:37:38.489 --rc genhtml_legend=1 00:37:38.489 --rc geninfo_all_blocks=1 00:37:38.489 --rc geninfo_unexecuted_blocks=1 00:37:38.489 00:37:38.489 ' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:38.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.489 --rc genhtml_branch_coverage=1 00:37:38.489 --rc genhtml_function_coverage=1 00:37:38.489 --rc genhtml_legend=1 00:37:38.489 --rc geninfo_all_blocks=1 00:37:38.489 --rc geninfo_unexecuted_blocks=1 00:37:38.489 00:37:38.489 ' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.489 01:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.489 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.490 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:38.490 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:38.490 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.490 01:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:41.022 01:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:41.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:41.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:41.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.022 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:41.022 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.023 
01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:41.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:41.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:37:41.023 00:37:41.023 --- 10.0.0.2 ping statistics --- 00:37:41.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.023 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:41.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:41.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:37:41.023 00:37:41.023 --- 10.0.0.1 ping statistics --- 00:37:41.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.023 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=432979 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 432979 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 432979 ']' 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.023 01:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:41.023 [2024-12-07 01:04:57.013332] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
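Condensed, the nvmftestinit bring-up traced above for the two e810 ports (cvl_0_0 on the target side, cvl_0_1 on the initiator side) comes down to roughly the following commands; this is a sketch of the happy path only, with the helper functions and error handling of nvmf/common.sh omitted.

    # Move the target-side port into its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side and check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1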
00:37:41.023 [2024-12-07 01:04:57.014380] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:41.023 [2024-12-07 01:04:57.014435] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.023 [2024-12-07 01:04:57.086802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:41.023 [2024-12-07 01:04:57.132064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.023 [2024-12-07 01:04:57.132120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.023 [2024-12-07 01:04:57.132148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.023 [2024-12-07 01:04:57.132160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.023 [2024-12-07 01:04:57.132170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:41.023 [2024-12-07 01:04:57.133645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.023 [2024-12-07 01:04:57.133710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:41.023 [2024-12-07 01:04:57.133713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.280 [2024-12-07 01:04:57.220240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:41.280 [2024-12-07 01:04:57.220435] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:41.280 [2024-12-07 01:04:57.220469] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:41.280 [2024-12-07 01:04:57.220696] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
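The target itself is then started inside that namespace on three cores with interrupt mode enabled, which is what the reactor and spdk_thread notices above reflect. A minimal sketch, in which a plain polling loop stands in for the waitforlisten helper the script actually uses:

    # Start nvmf_tgt in the target namespace on cores 0-2 (-m 0x7) with
    # --interrupt-mode, as in the nvmfappstart trace above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the default RPC socket until the target
    # answers, after which the transport/bdev/lvol RPCs that follow can be issued.
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done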
00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:41.280 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:41.538 [2024-12-07 01:04:57.530419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.538 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:41.796 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:41.796 01:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:42.053 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:42.053 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:42.312 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:42.571 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e8192b1-bf15-40a3-a09f-0ebd4e71d467 00:37:42.571 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e8192b1-bf15-40a3-a09f-0ebd4e71d467 lvol 20 00:37:42.829 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e167631f-aa06-428f-a5c7-ab6959a76649 00:37:42.829 01:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:43.395 01:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e167631f-aa06-428f-a5c7-ab6959a76649 00:37:43.395 01:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:43.654 [2024-12-07 01:04:59.770556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:43.654 01:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:43.913 01:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=433400 00:37:43.913 01:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:43.913 01:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:45.290 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e167631f-aa06-428f-a5c7-ab6959a76649 MY_SNAPSHOT 00:37:45.290 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ef12cd04-ce0c-4d16-8236-facb4f729134 00:37:45.290 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e167631f-aa06-428f-a5c7-ab6959a76649 30 00:37:45.549 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ef12cd04-ce0c-4d16-8236-facb4f729134 MY_CLONE 00:37:46.115 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3541b417-b5c2-414e-90af-0e160d3c48ce 00:37:46.115 01:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3541b417-b5c2-414e-90af-0e160d3c48ce 00:37:46.678 01:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 433400 00:37:54.782 Initializing NVMe Controllers 00:37:54.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:54.783 Controller IO queue size 128, less than required. 00:37:54.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:54.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:54.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:54.783 Initialization complete. Launching workers. 
00:37:54.783 ======================================================== 00:37:54.783 Latency(us) 00:37:54.783 Device Information : IOPS MiB/s Average min max 00:37:54.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9834.00 38.41 13019.85 2797.52 81576.86 00:37:54.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10670.70 41.68 11998.24 1892.57 90939.19 00:37:54.783 ======================================================== 00:37:54.783 Total : 20504.70 80.10 12488.20 1892.57 90939.19 00:37:54.783 00:37:54.783 01:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.783 01:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e167631f-aa06-428f-a5c7-ab6959a76649 00:37:54.783 01:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e8192b1-bf15-40a3-a09f-0ebd4e71d467 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.349 rmmod nvme_tcp 00:37:55.349 rmmod nvme_fabrics 00:37:55.349 rmmod nvme_keyring 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 432979 ']' 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 432979 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 432979 ']' 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 432979 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432979 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432979' 00:37:55.349 killing process with pid 432979 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 432979 00:37:55.349 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 432979 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:55.609 01:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:57.510 00:37:57.510 real 0m19.286s 00:37:57.510 user 0m55.449s 00:37:57.510 sys 0m8.212s 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:57.510 ************************************ 00:37:57.510 END TEST nvmf_lvol 00:37:57.510 ************************************ 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:57.510 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:57.768 ************************************ 00:37:57.768 START TEST nvmf_lvs_grow 00:37:57.768 
************************************ 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:57.768 * Looking for test storage... 00:37:57.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:57.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.768 --rc genhtml_branch_coverage=1 00:37:57.768 --rc genhtml_function_coverage=1 00:37:57.768 --rc genhtml_legend=1 00:37:57.768 --rc geninfo_all_blocks=1 00:37:57.768 --rc geninfo_unexecuted_blocks=1 00:37:57.768 00:37:57.768 ' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:57.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.768 --rc genhtml_branch_coverage=1 00:37:57.768 --rc genhtml_function_coverage=1 00:37:57.768 --rc genhtml_legend=1 00:37:57.768 --rc geninfo_all_blocks=1 00:37:57.768 --rc geninfo_unexecuted_blocks=1 00:37:57.768 00:37:57.768 ' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:57.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.768 --rc genhtml_branch_coverage=1 00:37:57.768 --rc genhtml_function_coverage=1 00:37:57.768 --rc genhtml_legend=1 00:37:57.768 --rc geninfo_all_blocks=1 00:37:57.768 --rc geninfo_unexecuted_blocks=1 00:37:57.768 00:37:57.768 ' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:57.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.768 --rc genhtml_branch_coverage=1 00:37:57.768 --rc genhtml_function_coverage=1 00:37:57.768 --rc genhtml_legend=1 00:37:57.768 --rc geninfo_all_blocks=1 00:37:57.768 --rc geninfo_unexecuted_blocks=1 00:37:57.768 00:37:57.768 ' 00:37:57.768 01:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.768 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:57.769 01:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:00.297 01:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:00.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:00.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.297 01:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:00.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.297 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:00.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:00.298 01:05:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:00.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:00.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:38:00.298 00:38:00.298 --- 10.0.0.2 ping statistics --- 00:38:00.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.298 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:00.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:00.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:38:00.298 00:38:00.298 --- 10.0.0.1 ping statistics --- 00:38:00.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.298 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=436654 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 436654 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 436654 ']' 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.298 [2024-12-07 01:05:16.212027] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:00.298 [2024-12-07 01:05:16.213077] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:00.298 [2024-12-07 01:05:16.213141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.298 [2024-12-07 01:05:16.284812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.298 [2024-12-07 01:05:16.328141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.298 [2024-12-07 01:05:16.328201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.298 [2024-12-07 01:05:16.328229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.298 [2024-12-07 01:05:16.328240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.298 [2024-12-07 01:05:16.328249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:00.298 [2024-12-07 01:05:16.328814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.298 [2024-12-07 01:05:16.411256] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:00.298 [2024-12-07 01:05:16.411552] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.298 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.556 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.556 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:00.814 [2024-12-07 01:05:16.725441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:00.814 ************************************ 00:38:00.814 START TEST lvs_grow_clean 00:38:00.814 ************************************ 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.814 01:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:01.072 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:01.072 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:01.330 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d051584f-48ce-43c2-83ff-1a699fe29724 00:38:01.330 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:01.330 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:01.588 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:01.588 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:01.588 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d051584f-48ce-43c2-83ff-1a699fe29724 lvol 150 00:38:01.847 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=658737db-76e7-4785-ae0b-bbf3f69f0075 00:38:01.847 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:01.847 01:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:02.104 [2024-12-07 01:05:18.149345] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:02.104 [2024-12-07 01:05:18.149455] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:02.104 true 00:38:02.104 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:02.104 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:02.362 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:02.362 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:02.619 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 658737db-76e7-4785-ae0b-bbf3f69f0075 00:38:02.878 01:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:03.136 [2024-12-07 01:05:19.241680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.136 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=437089 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 437089 /var/tmp/bdevperf.sock 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 437089 ']' 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:03.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:03.394 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.652 [2024-12-07 01:05:19.574605] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:03.652 [2024-12-07 01:05:19.574702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437089 ] 00:38:03.652 [2024-12-07 01:05:19.644046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.652 [2024-12-07 01:05:19.694382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.910 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:03.910 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:03.911 01:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:04.168 Nvme0n1 00:38:04.168 01:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:04.426 [ 00:38:04.426 { 00:38:04.426 "name": "Nvme0n1", 00:38:04.426 "aliases": [ 00:38:04.426 "658737db-76e7-4785-ae0b-bbf3f69f0075" 00:38:04.426 ], 00:38:04.426 "product_name": "NVMe disk", 00:38:04.426 "block_size": 4096, 00:38:04.426 "num_blocks": 38912, 00:38:04.426 "uuid": "658737db-76e7-4785-ae0b-bbf3f69f0075", 00:38:04.426 "numa_id": 0, 00:38:04.426 "assigned_rate_limits": { 00:38:04.426 "rw_ios_per_sec": 0, 00:38:04.426 "rw_mbytes_per_sec": 0, 00:38:04.426 "r_mbytes_per_sec": 0, 00:38:04.426 "w_mbytes_per_sec": 0 00:38:04.426 }, 00:38:04.426 "claimed": false, 00:38:04.426 "zoned": false, 00:38:04.426 "supported_io_types": { 00:38:04.426 "read": true, 00:38:04.426 "write": true, 00:38:04.426 "unmap": true, 00:38:04.426 "flush": true, 00:38:04.426 "reset": true, 00:38:04.426 "nvme_admin": true, 00:38:04.426 "nvme_io": true, 00:38:04.426 "nvme_io_md": false, 00:38:04.426 "write_zeroes": true, 00:38:04.426 "zcopy": false, 00:38:04.426 "get_zone_info": false, 00:38:04.426 "zone_management": false, 00:38:04.426 "zone_append": false, 00:38:04.426 "compare": true, 00:38:04.426 "compare_and_write": true, 00:38:04.426 "abort": true, 00:38:04.426 "seek_hole": false, 00:38:04.426 "seek_data": false, 00:38:04.426 "copy": true, 
00:38:04.426 "nvme_iov_md": false 00:38:04.426 }, 00:38:04.426 "memory_domains": [ 00:38:04.426 { 00:38:04.426 "dma_device_id": "system", 00:38:04.426 "dma_device_type": 1 00:38:04.426 } 00:38:04.426 ], 00:38:04.426 "driver_specific": { 00:38:04.426 "nvme": [ 00:38:04.426 { 00:38:04.426 "trid": { 00:38:04.426 "trtype": "TCP", 00:38:04.426 "adrfam": "IPv4", 00:38:04.426 "traddr": "10.0.0.2", 00:38:04.426 "trsvcid": "4420", 00:38:04.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:04.426 }, 00:38:04.426 "ctrlr_data": { 00:38:04.426 "cntlid": 1, 00:38:04.426 "vendor_id": "0x8086", 00:38:04.426 "model_number": "SPDK bdev Controller", 00:38:04.426 "serial_number": "SPDK0", 00:38:04.426 "firmware_revision": "25.01", 00:38:04.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.426 "oacs": { 00:38:04.426 "security": 0, 00:38:04.426 "format": 0, 00:38:04.426 "firmware": 0, 00:38:04.426 "ns_manage": 0 00:38:04.426 }, 00:38:04.426 "multi_ctrlr": true, 00:38:04.426 "ana_reporting": false 00:38:04.426 }, 00:38:04.426 "vs": { 00:38:04.426 "nvme_version": "1.3" 00:38:04.426 }, 00:38:04.426 "ns_data": { 00:38:04.426 "id": 1, 00:38:04.426 "can_share": true 00:38:04.426 } 00:38:04.426 } 00:38:04.426 ], 00:38:04.426 "mp_policy": "active_passive" 00:38:04.426 } 00:38:04.426 } 00:38:04.426 ] 00:38:04.426 01:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=437222 00:38:04.426 01:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:04.426 01:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:04.426 Running I/O for 10 seconds... 
00:38:05.801 Latency(us) 00:38:05.801 [2024-12-07T00:05:21.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.801 Nvme0n1 : 1.00 11811.00 46.14 0.00 0.00 0.00 0.00 0.00 00:38:05.801 [2024-12-07T00:05:21.952Z] =================================================================================================================== 00:38:05.801 [2024-12-07T00:05:21.952Z] Total : 11811.00 46.14 0.00 0.00 0.00 0.00 0.00 00:38:05.801 00:38:06.370 01:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:06.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.626 Nvme0n1 : 2.00 12573.00 49.11 0.00 0.00 0.00 0.00 0.00 00:38:06.626 [2024-12-07T00:05:22.777Z] =================================================================================================================== 00:38:06.626 [2024-12-07T00:05:22.777Z] Total : 12573.00 49.11 0.00 0.00 0.00 0.00 0.00 00:38:06.626 00:38:06.626 true 00:38:06.626 01:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:06.626 01:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:06.883 01:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:06.883 01:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:06.883 01:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 437222 00:38:07.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.447 Nvme0n1 : 3.00 13504.33 52.75 0.00 0.00 0.00 0.00 0.00 00:38:07.447 [2024-12-07T00:05:23.598Z] =================================================================================================================== 00:38:07.447 [2024-12-07T00:05:23.598Z] Total : 13504.33 52.75 0.00 0.00 0.00 0.00 0.00 00:38:07.447 00:38:08.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.817 Nvme0n1 : 4.00 14001.75 54.69 0.00 0.00 0.00 0.00 0.00 00:38:08.818 [2024-12-07T00:05:24.969Z] =================================================================================================================== 00:38:08.818 [2024-12-07T00:05:24.969Z] Total : 14001.75 54.69 0.00 0.00 0.00 0.00 0.00 00:38:08.818 00:38:09.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.750 Nvme0n1 : 5.00 14325.60 55.96 0.00 0.00 0.00 0.00 0.00 00:38:09.750 [2024-12-07T00:05:25.901Z] =================================================================================================================== 00:38:09.750 [2024-12-07T00:05:25.901Z] Total : 14325.60 55.96 0.00 0.00 0.00 0.00 0.00 00:38:09.750 00:38:10.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.790 Nvme0n1 : 6.00 14562.67 56.89 0.00 0.00 0.00 0.00 0.00 00:38:10.790 [2024-12-07T00:05:26.941Z] 
=================================================================================================================== 00:38:10.790 [2024-12-07T00:05:26.941Z] Total : 14562.67 56.89 0.00 0.00 0.00 0.00 0.00 00:38:10.790 00:38:11.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.725 Nvme0n1 : 7.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:38:11.725 [2024-12-07T00:05:27.876Z] =================================================================================================================== 00:38:11.725 [2024-12-07T00:05:27.876Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:38:11.725 00:38:12.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.660 Nvme0n1 : 8.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:38:12.660 [2024-12-07T00:05:28.811Z] =================================================================================================================== 00:38:12.660 [2024-12-07T00:05:28.811Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:38:12.660 00:38:13.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.609 Nvme0n1 : 9.00 14961.56 58.44 0.00 0.00 0.00 0.00 0.00 00:38:13.609 [2024-12-07T00:05:29.760Z] =================================================================================================================== 00:38:13.609 [2024-12-07T00:05:29.760Z] Total : 14961.56 58.44 0.00 0.00 0.00 0.00 0.00 00:38:13.609 00:38:14.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.545 Nvme0n1 : 10.00 15014.80 58.65 0.00 0.00 0.00 0.00 0.00 00:38:14.545 [2024-12-07T00:05:30.696Z] =================================================================================================================== 00:38:14.545 [2024-12-07T00:05:30.696Z] Total : 15014.80 58.65 0.00 0.00 0.00 0.00 0.00 00:38:14.545 00:38:14.545 00:38:14.545 Latency(us) 00:38:14.545 [2024-12-07T00:05:30.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.545 Nvme0n1 : 10.01 15019.64 58.67 0.00 0.00 8517.52 4490.43 22622.06 00:38:14.545 [2024-12-07T00:05:30.696Z] =================================================================================================================== 00:38:14.545 [2024-12-07T00:05:30.696Z] Total : 15019.64 58.67 0.00 0.00 8517.52 4490.43 22622.06 00:38:14.545 { 00:38:14.545 "results": [ 00:38:14.545 { 00:38:14.545 "job": "Nvme0n1", 00:38:14.545 "core_mask": "0x2", 00:38:14.545 "workload": "randwrite", 00:38:14.545 "status": "finished", 00:38:14.545 "queue_depth": 128, 00:38:14.545 "io_size": 4096, 00:38:14.545 "runtime": 10.005298, 00:38:14.545 "iops": 15019.642593354041, 00:38:14.545 "mibps": 58.670478880289224, 00:38:14.545 "io_failed": 0, 00:38:14.545 "io_timeout": 0, 00:38:14.545 "avg_latency_us": 8517.520234156806, 00:38:14.545 "min_latency_us": 4490.42962962963, 00:38:14.545 "max_latency_us": 22622.056296296298 00:38:14.545 } 00:38:14.545 ], 00:38:14.545 "core_count": 1 00:38:14.545 } 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 437089 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 437089 ']' 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 437089 
00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437089 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437089' 00:38:14.545 killing process with pid 437089 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 437089 00:38:14.545 Received shutdown signal, test time was about 10.000000 seconds 00:38:14.545 00:38:14.545 Latency(us) 00:38:14.545 [2024-12-07T00:05:30.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.545 [2024-12-07T00:05:30.696Z] =================================================================================================================== 00:38:14.545 [2024-12-07T00:05:30.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:14.545 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 437089 00:38:14.803 01:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:15.061 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:15.629 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:15.629 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:15.629 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:15.629 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:15.629 01:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:15.888 [2024-12-07 01:05:32.029369] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:16.147 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 
00:38:16.147 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:16.148 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:16.407 request: 00:38:16.407 { 00:38:16.407 "uuid": "d051584f-48ce-43c2-83ff-1a699fe29724", 00:38:16.407 "method": "bdev_lvol_get_lvstores", 00:38:16.407 "req_id": 1 00:38:16.407 } 00:38:16.407 Got JSON-RPC error response 00:38:16.407 response: 00:38:16.407 { 00:38:16.407 "code": -19, 00:38:16.407 "message": "No such device" 00:38:16.407 } 00:38:16.407 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:16.407 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:16.407 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:16.407 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:16.407 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:16.666 aio_bdev 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
658737db-76e7-4785-ae0b-bbf3f69f0075 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=658737db-76e7-4785-ae0b-bbf3f69f0075 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:16.666 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:16.924 01:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 658737db-76e7-4785-ae0b-bbf3f69f0075 -t 2000 00:38:17.183 [ 00:38:17.183 { 00:38:17.183 "name": "658737db-76e7-4785-ae0b-bbf3f69f0075", 00:38:17.183 "aliases": [ 00:38:17.183 "lvs/lvol" 00:38:17.183 ], 00:38:17.183 "product_name": "Logical Volume", 00:38:17.183 "block_size": 4096, 00:38:17.183 "num_blocks": 38912, 00:38:17.183 "uuid": "658737db-76e7-4785-ae0b-bbf3f69f0075", 00:38:17.183 "assigned_rate_limits": { 00:38:17.183 "rw_ios_per_sec": 0, 00:38:17.183 "rw_mbytes_per_sec": 0, 00:38:17.183 "r_mbytes_per_sec": 0, 00:38:17.183 "w_mbytes_per_sec": 0 00:38:17.183 }, 00:38:17.183 "claimed": false, 00:38:17.183 "zoned": false, 00:38:17.183 "supported_io_types": { 00:38:17.183 "read": true, 00:38:17.183 "write": true, 00:38:17.183 "unmap": true, 00:38:17.183 "flush": false, 00:38:17.183 "reset": true, 00:38:17.183 "nvme_admin": false, 00:38:17.183 "nvme_io": false, 00:38:17.183 "nvme_io_md": false, 00:38:17.183 "write_zeroes": true, 00:38:17.183 "zcopy": false, 00:38:17.183 "get_zone_info": false, 00:38:17.183 "zone_management": false, 00:38:17.183 "zone_append": false, 00:38:17.183 "compare": false, 00:38:17.183 "compare_and_write": false, 00:38:17.183 "abort": false, 00:38:17.183 "seek_hole": true, 00:38:17.183 "seek_data": true, 00:38:17.183 "copy": false, 00:38:17.183 "nvme_iov_md": false 00:38:17.183 }, 00:38:17.183 "driver_specific": { 00:38:17.183 "lvol": { 00:38:17.183 "lvol_store_uuid": "d051584f-48ce-43c2-83ff-1a699fe29724", 00:38:17.183 "base_bdev": "aio_bdev", 00:38:17.183 "thin_provision": false, 00:38:17.183 "num_allocated_clusters": 38, 00:38:17.183 "snapshot": false, 00:38:17.183 "clone": false, 00:38:17.183 "esnap_clone": false 00:38:17.183 } 00:38:17.183 } 00:38:17.183 } 00:38:17.183 ] 00:38:17.183 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:17.183 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:17.183 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:17.441 01:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:17.441 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:17.441 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:17.700 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:17.700 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 658737db-76e7-4785-ae0b-bbf3f69f0075 00:38:17.958 01:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d051584f-48ce-43c2-83ff-1a699fe29724 00:38:18.216 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.475 00:38:18.475 real 0m17.783s 00:38:18.475 user 0m17.352s 00:38:18.475 sys 0m1.894s 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:18.475 ************************************ 00:38:18.475 END TEST lvs_grow_clean 00:38:18.475 ************************************ 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.475 ************************************ 00:38:18.475 START TEST lvs_grow_dirty 00:38:18.475 ************************************ 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.475 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:19.044 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:19.044 01:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:19.044 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=20002be0-639d-4e25-90d7-f23870100620 00:38:19.044 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:19.044 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 20002be0-639d-4e25-90d7-f23870100620 lvol 150 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:19.608 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:19.865 [2024-12-07 01:05:35.981353] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:19.865 [2024-12-07 01:05:35.981464] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:19.865 true 00:38:19.865 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:19.865 01:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:20.430 01:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:20.430 01:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:20.430 01:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:20.688 01:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:20.946 [2024-12-07 01:05:37.085599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:21.203 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=439243 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 439243 /var/tmp/bdevperf.sock 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 439243 ']' 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:21.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
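The next stretch of the trace shows how bdevperf is driven for the dirty variant: it is started with -z so it waits on its own RPC socket, the NVMe-oF namespace is attached over that socket, and perform_tests starts the 10-second randwrite run. A condensed sketch of what the harness does here (paths shortened to binary/script names):

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests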
00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:21.462 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:21.462 [2024-12-07 01:05:37.425334] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:21.462 [2024-12-07 01:05:37.425413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439243 ] 00:38:21.462 [2024-12-07 01:05:37.494199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.462 [2024-12-07 01:05:37.546818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.721 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.721 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:21.721 01:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:22.287 Nvme0n1 00:38:22.287 01:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:22.546 [ 00:38:22.546 { 00:38:22.546 "name": "Nvme0n1", 00:38:22.546 "aliases": [ 00:38:22.546 "469971bd-3e2b-4fe8-a991-081987cda1fe" 00:38:22.546 ], 00:38:22.546 "product_name": "NVMe disk", 00:38:22.546 "block_size": 4096, 00:38:22.546 "num_blocks": 38912, 00:38:22.546 "uuid": "469971bd-3e2b-4fe8-a991-081987cda1fe", 00:38:22.546 "numa_id": 0, 00:38:22.546 "assigned_rate_limits": { 00:38:22.546 "rw_ios_per_sec": 0, 00:38:22.546 "rw_mbytes_per_sec": 0, 00:38:22.546 "r_mbytes_per_sec": 0, 00:38:22.546 "w_mbytes_per_sec": 0 00:38:22.546 }, 00:38:22.546 "claimed": false, 00:38:22.546 "zoned": false, 00:38:22.546 "supported_io_types": { 00:38:22.546 "read": true, 00:38:22.546 "write": true, 00:38:22.546 "unmap": true, 00:38:22.546 "flush": true, 00:38:22.546 "reset": true, 00:38:22.546 "nvme_admin": true, 00:38:22.546 "nvme_io": true, 00:38:22.546 "nvme_io_md": false, 00:38:22.546 "write_zeroes": true, 00:38:22.546 "zcopy": false, 00:38:22.546 "get_zone_info": false, 00:38:22.546 "zone_management": false, 00:38:22.546 "zone_append": false, 00:38:22.546 "compare": true, 00:38:22.546 "compare_and_write": true, 00:38:22.546 "abort": true, 00:38:22.546 "seek_hole": false, 00:38:22.546 "seek_data": false, 00:38:22.546 "copy": true, 00:38:22.546 "nvme_iov_md": false 00:38:22.546 }, 00:38:22.546 "memory_domains": [ 00:38:22.546 { 00:38:22.546 "dma_device_id": "system", 00:38:22.546 "dma_device_type": 1 00:38:22.546 } 00:38:22.546 ], 00:38:22.546 "driver_specific": { 00:38:22.546 "nvme": [ 00:38:22.546 { 00:38:22.546 "trid": { 00:38:22.546 "trtype": "TCP", 00:38:22.546 "adrfam": "IPv4", 00:38:22.546 "traddr": "10.0.0.2", 00:38:22.546 "trsvcid": "4420", 00:38:22.546 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:22.546 }, 00:38:22.546 "ctrlr_data": { 
00:38:22.546 "cntlid": 1, 00:38:22.546 "vendor_id": "0x8086", 00:38:22.546 "model_number": "SPDK bdev Controller", 00:38:22.546 "serial_number": "SPDK0", 00:38:22.546 "firmware_revision": "25.01", 00:38:22.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:22.546 "oacs": { 00:38:22.546 "security": 0, 00:38:22.546 "format": 0, 00:38:22.546 "firmware": 0, 00:38:22.546 "ns_manage": 0 00:38:22.546 }, 00:38:22.546 "multi_ctrlr": true, 00:38:22.546 "ana_reporting": false 00:38:22.546 }, 00:38:22.546 "vs": { 00:38:22.546 "nvme_version": "1.3" 00:38:22.546 }, 00:38:22.546 "ns_data": { 00:38:22.546 "id": 1, 00:38:22.546 "can_share": true 00:38:22.546 } 00:38:22.546 } 00:38:22.546 ], 00:38:22.546 "mp_policy": "active_passive" 00:38:22.546 } 00:38:22.546 } 00:38:22.546 ] 00:38:22.546 01:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=439340 00:38:22.546 01:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:22.546 01:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:22.546 Running I/O for 10 seconds... 00:38:23.480 Latency(us) 00:38:23.480 [2024-12-07T00:05:39.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.480 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:23.480 [2024-12-07T00:05:39.631Z] =================================================================================================================== 00:38:23.480 [2024-12-07T00:05:39.631Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:38:23.480 00:38:24.415 01:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 20002be0-639d-4e25-90d7-f23870100620 00:38:24.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.673 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:38:24.673 [2024-12-07T00:05:40.824Z] =================================================================================================================== 00:38:24.673 [2024-12-07T00:05:40.824Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:38:24.673 00:38:24.673 true 00:38:24.673 01:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:24.673 01:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:25.239 01:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:25.239 01:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:25.239 01:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 439340 00:38:25.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.497 Nvme0n1 : 3.00 
15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:38:25.497 [2024-12-07T00:05:41.648Z] =================================================================================================================== 00:38:25.497 [2024-12-07T00:05:41.648Z] Total : 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:38:25.497 00:38:26.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.870 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:38:26.870 [2024-12-07T00:05:43.021Z] =================================================================================================================== 00:38:26.870 [2024-12-07T00:05:43.021Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:38:26.870 00:38:27.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.492 Nvme0n1 : 5.00 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:38:27.492 [2024-12-07T00:05:43.643Z] =================================================================================================================== 00:38:27.492 [2024-12-07T00:05:43.643Z] Total : 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:38:27.492 00:38:28.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.865 Nvme0n1 : 6.00 15356.50 59.99 0.00 0.00 0.00 0.00 0.00 00:38:28.865 [2024-12-07T00:05:45.016Z] =================================================================================================================== 00:38:28.865 [2024-12-07T00:05:45.016Z] Total : 15356.50 59.99 0.00 0.00 0.00 0.00 0.00 00:38:28.865 00:38:29.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.794 Nvme0n1 : 7.00 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:38:29.794 [2024-12-07T00:05:45.945Z] =================================================================================================================== 00:38:29.794 [2024-12-07T00:05:45.945Z] Total : 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:38:29.794 00:38:30.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.726 Nvme0n1 : 8.00 15446.38 60.34 0.00 0.00 0.00 0.00 0.00 00:38:30.726 [2024-12-07T00:05:46.877Z] =================================================================================================================== 00:38:30.726 [2024-12-07T00:05:46.877Z] Total : 15446.38 60.34 0.00 0.00 0.00 0.00 0.00 00:38:30.726 00:38:31.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.660 Nvme0n1 : 9.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:38:31.660 [2024-12-07T00:05:47.811Z] =================================================================================================================== 00:38:31.660 [2024-12-07T00:05:47.811Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:38:31.660 00:38:32.595 00:38:32.595 Latency(us) 00:38:32.595 [2024-12-07T00:05:48.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.596 Nvme0n1 : 10.00 15528.25 60.66 0.00 0.00 8238.27 6310.87 17864.63 00:38:32.596 [2024-12-07T00:05:48.747Z] =================================================================================================================== 00:38:32.596 [2024-12-07T00:05:48.747Z] Total : 15528.25 60.66 0.00 0.00 8238.27 6310.87 17864.63 00:38:32.596 { 00:38:32.596 "results": [ 00:38:32.596 { 00:38:32.596 "job": "Nvme0n1", 00:38:32.596 "core_mask": "0x2", 00:38:32.596 "workload": "randwrite", 00:38:32.596 "status": "finished", 
00:38:32.596 "queue_depth": 128, 00:38:32.596 "io_size": 4096, 00:38:32.596 "runtime": 10.002541, 00:38:32.596 "iops": 15528.254270589843, 00:38:32.596 "mibps": 60.657243244491575, 00:38:32.596 "io_failed": 0, 00:38:32.596 "io_timeout": 0, 00:38:32.596 "avg_latency_us": 8238.265009817122, 00:38:32.596 "min_latency_us": 6310.874074074074, 00:38:32.596 "max_latency_us": 17864.62814814815 00:38:32.596 } 00:38:32.596 ], 00:38:32.596 "core_count": 1 00:38:32.596 } 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 439243 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 439243 ']' 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 439243 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439243 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439243' 00:38:32.596 killing process with pid 439243 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 439243 00:38:32.596 Received shutdown signal, test time was about 10.000000 seconds 00:38:32.596 00:38:32.596 Latency(us) 00:38:32.596 [2024-12-07T00:05:48.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.596 [2024-12-07T00:05:48.747Z] =================================================================================================================== 00:38:32.596 [2024-12-07T00:05:48.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:32.596 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 439243 00:38:32.854 01:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:33.113 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:33.372 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:33.372 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:33.631 
01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 436654 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 436654 00:38:33.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 436654 Killed "${NVMF_APP[@]}" "$@" 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=440580 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 440580 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 440580 ']' 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.631 01:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:33.891 [2024-12-07 01:05:49.810183] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:33.891 [2024-12-07 01:05:49.811315] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
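What makes this the dirty variant is visible in the trace just above and below: the target that owns the lvstore is killed with SIGKILL, so the lvstore is never unloaded cleanly, a fresh nvmf_tgt is started in interrupt mode, and re-creating aio_bdev forces blobstore recovery before the lvol is reopened. A condensed sketch (paths shortened, PIDs from this run):

    kill -9 436654                                        # old nvmf_tgt; lvstore left dirty
    nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &     # new target, pid 440580 in this run
    rpc.py bdev_aio_create .../aio_bdev aio_bdev 4096     # reload triggers 'Performing recovery on blobstore'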
00:38:33.891 [2024-12-07 01:05:49.811381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.891 [2024-12-07 01:05:49.888201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.891 [2024-12-07 01:05:49.932927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.891 [2024-12-07 01:05:49.933007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.891 [2024-12-07 01:05:49.933024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.891 [2024-12-07 01:05:49.933035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.891 [2024-12-07 01:05:49.933058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.891 [2024-12-07 01:05:49.933618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.891 [2024-12-07 01:05:50.018060] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:33.891 [2024-12-07 01:05:50.018469] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.150 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:34.409 [2024-12-07 01:05:50.364402] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:34.409 [2024-12-07 01:05:50.364553] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:34.409 [2024-12-07 01:05:50.364604] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:34.409 01:05:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:34.409 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:34.668 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 469971bd-3e2b-4fe8-a991-081987cda1fe -t 2000 00:38:34.928 [ 00:38:34.928 { 00:38:34.928 "name": "469971bd-3e2b-4fe8-a991-081987cda1fe", 00:38:34.928 "aliases": [ 00:38:34.929 "lvs/lvol" 00:38:34.929 ], 00:38:34.929 "product_name": "Logical Volume", 00:38:34.929 "block_size": 4096, 00:38:34.929 "num_blocks": 38912, 00:38:34.929 "uuid": "469971bd-3e2b-4fe8-a991-081987cda1fe", 00:38:34.929 "assigned_rate_limits": { 00:38:34.929 "rw_ios_per_sec": 0, 00:38:34.929 "rw_mbytes_per_sec": 0, 00:38:34.929 "r_mbytes_per_sec": 0, 00:38:34.929 "w_mbytes_per_sec": 0 00:38:34.929 }, 00:38:34.929 "claimed": false, 00:38:34.929 "zoned": false, 00:38:34.929 "supported_io_types": { 00:38:34.929 "read": true, 00:38:34.929 "write": true, 00:38:34.929 "unmap": true, 00:38:34.929 "flush": false, 00:38:34.929 "reset": true, 00:38:34.929 "nvme_admin": false, 00:38:34.929 "nvme_io": false, 00:38:34.929 "nvme_io_md": false, 00:38:34.929 "write_zeroes": true, 00:38:34.929 "zcopy": false, 00:38:34.929 "get_zone_info": false, 00:38:34.929 "zone_management": false, 00:38:34.929 "zone_append": false, 00:38:34.929 "compare": false, 00:38:34.929 "compare_and_write": false, 00:38:34.929 "abort": false, 00:38:34.929 "seek_hole": true, 00:38:34.929 "seek_data": true, 00:38:34.929 "copy": false, 00:38:34.929 "nvme_iov_md": false 00:38:34.929 }, 00:38:34.929 "driver_specific": { 00:38:34.929 "lvol": { 00:38:34.929 "lvol_store_uuid": "20002be0-639d-4e25-90d7-f23870100620", 00:38:34.929 "base_bdev": "aio_bdev", 00:38:34.929 "thin_provision": false, 00:38:34.929 "num_allocated_clusters": 38, 00:38:34.929 "snapshot": false, 00:38:34.929 "clone": false, 00:38:34.929 "esnap_clone": false 00:38:34.929 } 00:38:34.929 } 00:38:34.929 } 00:38:34.929 ] 00:38:34.929 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:34.929 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:34.929 01:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:35.189 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:35.189 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:35.189 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:35.448 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:35.448 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:35.709 [2024-12-07 01:05:51.762137] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:35.709 01:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:35.968 request: 00:38:35.968 { 00:38:35.968 "uuid": "20002be0-639d-4e25-90d7-f23870100620", 00:38:35.968 "method": "bdev_lvol_get_lvstores", 00:38:35.968 "req_id": 1 00:38:35.968 } 00:38:35.968 Got JSON-RPC error response 00:38:35.968 response: 00:38:35.968 { 00:38:35.968 "code": -19, 00:38:35.968 "message": "No such device" 
00:38:35.968 } 00:38:35.968 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:35.968 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.968 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.968 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.968 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:36.229 aio_bdev 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:36.229 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:36.490 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 469971bd-3e2b-4fe8-a991-081987cda1fe -t 2000 00:38:36.749 [ 00:38:36.749 { 00:38:36.749 "name": "469971bd-3e2b-4fe8-a991-081987cda1fe", 00:38:36.749 "aliases": [ 00:38:36.749 "lvs/lvol" 00:38:36.749 ], 00:38:36.749 "product_name": "Logical Volume", 00:38:36.749 "block_size": 4096, 00:38:36.749 "num_blocks": 38912, 00:38:36.749 "uuid": "469971bd-3e2b-4fe8-a991-081987cda1fe", 00:38:36.749 "assigned_rate_limits": { 00:38:36.749 "rw_ios_per_sec": 0, 00:38:36.749 "rw_mbytes_per_sec": 0, 00:38:36.749 "r_mbytes_per_sec": 0, 00:38:36.749 "w_mbytes_per_sec": 0 00:38:36.749 }, 00:38:36.749 "claimed": false, 00:38:36.749 "zoned": false, 00:38:36.749 "supported_io_types": { 00:38:36.749 "read": true, 00:38:36.749 "write": true, 00:38:36.749 "unmap": true, 00:38:36.749 "flush": false, 00:38:36.749 "reset": true, 00:38:36.749 "nvme_admin": false, 00:38:36.749 "nvme_io": false, 00:38:36.749 "nvme_io_md": false, 00:38:36.749 "write_zeroes": true, 00:38:36.749 "zcopy": false, 00:38:36.749 "get_zone_info": false, 00:38:36.749 "zone_management": false, 00:38:36.749 "zone_append": false, 00:38:36.749 "compare": false, 00:38:36.749 "compare_and_write": false, 00:38:36.749 "abort": false, 00:38:36.749 "seek_hole": true, 00:38:36.749 "seek_data": true, 00:38:36.749 "copy": false, 
00:38:36.749 "nvme_iov_md": false 00:38:36.749 }, 00:38:36.749 "driver_specific": { 00:38:36.749 "lvol": { 00:38:36.749 "lvol_store_uuid": "20002be0-639d-4e25-90d7-f23870100620", 00:38:36.749 "base_bdev": "aio_bdev", 00:38:36.749 "thin_provision": false, 00:38:36.749 "num_allocated_clusters": 38, 00:38:36.749 "snapshot": false, 00:38:36.749 "clone": false, 00:38:36.749 "esnap_clone": false 00:38:36.749 } 00:38:36.749 } 00:38:36.749 } 00:38:36.749 ] 00:38:37.007 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:37.007 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:37.007 01:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:37.266 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:37.266 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20002be0-639d-4e25-90d7-f23870100620 00:38:37.266 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:37.524 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:37.524 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 469971bd-3e2b-4fe8-a991-081987cda1fe 00:38:37.783 01:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20002be0-639d-4e25-90d7-f23870100620 00:38:38.042 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:38.303 00:38:38.303 real 0m19.712s 00:38:38.303 user 0m36.715s 00:38:38.303 sys 0m4.729s 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:38.303 ************************************ 00:38:38.303 END TEST lvs_grow_dirty 00:38:38.303 ************************************ 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:38.303 
01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:38.303 nvmf_trace.0 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.303 rmmod nvme_tcp 00:38:38.303 rmmod nvme_fabrics 00:38:38.303 rmmod nvme_keyring 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 440580 ']' 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 440580 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 440580 ']' 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 440580 00:38:38.303 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440580 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440580' 00:38:38.564 killing process with pid 440580 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 440580 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 440580 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.564 01:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.099 00:38:41.099 real 0m43.055s 00:38:41.099 user 0m55.763s 00:38:41.099 sys 0m8.773s 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:41.099 ************************************ 00:38:41.099 END TEST nvmf_lvs_grow 00:38:41.099 ************************************ 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.099 ************************************ 00:38:41.099 START TEST nvmf_bdev_io_wait 00:38:41.099 ************************************ 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:38:41.099 * Looking for test storage... 00:38:41.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:41.099 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.100 --rc genhtml_branch_coverage=1 00:38:41.100 --rc genhtml_function_coverage=1 00:38:41.100 --rc genhtml_legend=1 00:38:41.100 --rc geninfo_all_blocks=1 00:38:41.100 --rc geninfo_unexecuted_blocks=1 00:38:41.100 00:38:41.100 ' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.100 --rc genhtml_branch_coverage=1 00:38:41.100 --rc genhtml_function_coverage=1 00:38:41.100 --rc genhtml_legend=1 00:38:41.100 --rc geninfo_all_blocks=1 00:38:41.100 --rc geninfo_unexecuted_blocks=1 00:38:41.100 00:38:41.100 ' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.100 --rc genhtml_branch_coverage=1 00:38:41.100 --rc genhtml_function_coverage=1 00:38:41.100 --rc genhtml_legend=1 00:38:41.100 --rc geninfo_all_blocks=1 00:38:41.100 --rc geninfo_unexecuted_blocks=1 00:38:41.100 00:38:41.100 ' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.100 --rc genhtml_branch_coverage=1 00:38:41.100 --rc genhtml_function_coverage=1 00:38:41.100 --rc genhtml_legend=1 00:38:41.100 --rc geninfo_all_blocks=1 00:38:41.100 --rc 
geninfo_unexecuted_blocks=1 00:38:41.100 00:38:41.100 ' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.100 01:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
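The variable setup above and the per-device loop that follows in the trace are SPDK's NIC discovery: nvmf/common.sh whitelists known Intel E810/X722 and Mellanox vendor:device IDs, keeps the PCI functions that match, and maps each surviving function to its kernel net interface through sysfs. A condensed sketch of that idea follows, with the device IDs taken from the trace; using lspci in place of the script's own pci_bus_cache is an assumption made only for illustration.

    # Sketch of the discovery traced here; lspci stands in for pci_bus_cache,
    # and only the E810 IDs shown in the trace are listed.
    intel=0x8086
    e810=(0x1592 0x159b)                      # Intel E810 variants accepted by the test
    pci_devs=() net_devs=()
    for dev_id in "${e810[@]}"; do
        while read -r pci _; do               # lspci -Dnn: "0000:0a:00.0 Ethernet controller ..."
            pci_devs+=("$pci")
        done < <(lspci -Dnn -d "${intel#0x}:${dev_id#0x}")
    done
    for pci in "${pci_devs[@]}"; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"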
00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:43.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:43.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.005 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:43.006 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:43.006 
01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:43.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:43.006 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:43.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:43.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:38:43.265 00:38:43.265 --- 10.0.0.2 ping statistics --- 00:38:43.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.265 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:43.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:43.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:38:43.265 00:38:43.265 --- 10.0.0.1 ping statistics --- 00:38:43.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.265 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:43.265 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=443219 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 443219 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 443219 ']' 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:43.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:43.266 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.266 [2024-12-07 01:05:59.316704] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:43.266 [2024-12-07 01:05:59.317807] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:43.266 [2024-12-07 01:05:59.317877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:43.266 [2024-12-07 01:05:59.393232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:43.538 [2024-12-07 01:05:59.443657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:43.539 [2024-12-07 01:05:59.443712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:43.539 [2024-12-07 01:05:59.443740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:43.539 [2024-12-07 01:05:59.443752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:43.539 [2024-12-07 01:05:59.443762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:43.539 [2024-12-07 01:05:59.445414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:43.539 [2024-12-07 01:05:59.445479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:43.539 [2024-12-07 01:05:59.445549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:43.539 [2024-12-07 01:05:59.445552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.539 [2024-12-07 01:05:59.446034] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.539 [2024-12-07 01:05:59.647088] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:43.539 [2024-12-07 01:05:59.647313] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:43.539 [2024-12-07 01:05:59.648215] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:43.539 [2024-12-07 01:05:59.648964] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.539 [2024-12-07 01:05:59.654266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.539 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.798 Malloc0 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.798 [2024-12-07 01:05:59.710431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=443251 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:43.798 01:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=443253 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.798 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.799 { 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme$subsystem", 00:38:43.799 "trtype": "$TEST_TRANSPORT", 00:38:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "$NVMF_PORT", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.799 "hdgst": ${hdgst:-false}, 00:38:43.799 "ddgst": ${ddgst:-false} 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 } 00:38:43.799 EOF 00:38:43.799 )") 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=443255 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.799 { 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme$subsystem", 00:38:43.799 "trtype": "$TEST_TRANSPORT", 00:38:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "$NVMF_PORT", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.799 "hdgst": ${hdgst:-false}, 00:38:43.799 "ddgst": ${ddgst:-false} 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 } 00:38:43.799 EOF 00:38:43.799 )") 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:43.799 
01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=443258 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.799 { 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme$subsystem", 00:38:43.799 "trtype": "$TEST_TRANSPORT", 00:38:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "$NVMF_PORT", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.799 "hdgst": ${hdgst:-false}, 00:38:43.799 "ddgst": ${ddgst:-false} 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 } 00:38:43.799 EOF 00:38:43.799 )") 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.799 { 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme$subsystem", 00:38:43.799 "trtype": "$TEST_TRANSPORT", 00:38:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "$NVMF_PORT", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.799 "hdgst": ${hdgst:-false}, 00:38:43.799 "ddgst": ${ddgst:-false} 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 } 00:38:43.799 EOF 00:38:43.799 )") 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 443251 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
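The rpc_cmd calls traced above (bdev_io_wait.sh steps 18 through 25) are what configure the target that these bdevperf jobs connect to. rpc_cmd is the harness wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock socket, so a rough standalone equivalent of the sequence would look like the sketch below; the wrapper and socket details are assumed from the harness defaults, while the subcommands and arguments are copied from the trace.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC bdev_set_options -p 5 -c 1      # small bdev_io pool/cache, the point of the io_wait test
    $RPC framework_start_init            # finish startup; nvmf_tgt was launched with --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport options as used in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420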
00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme1", 00:38:43.799 "trtype": "tcp", 00:38:43.799 "traddr": "10.0.0.2", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "4420", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.799 "hdgst": false, 00:38:43.799 "ddgst": false 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 }' 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme1", 00:38:43.799 "trtype": "tcp", 00:38:43.799 "traddr": "10.0.0.2", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "4420", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.799 "hdgst": false, 00:38:43.799 "ddgst": false 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 }' 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme1", 00:38:43.799 "trtype": "tcp", 00:38:43.799 "traddr": "10.0.0.2", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "4420", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.799 "hdgst": false, 00:38:43.799 "ddgst": false 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 }' 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:43.799 01:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.799 "params": { 00:38:43.799 "name": "Nvme1", 00:38:43.799 "trtype": "tcp", 00:38:43.799 "traddr": "10.0.0.2", 00:38:43.799 "adrfam": "ipv4", 00:38:43.799 "trsvcid": "4420", 00:38:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.799 "hdgst": false, 00:38:43.799 "ddgst": false 00:38:43.799 }, 00:38:43.799 "method": "bdev_nvme_attach_controller" 00:38:43.799 }' 00:38:43.799 [2024-12-07 01:05:59.760895] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:43.799 [2024-12-07 01:05:59.760895] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:38:43.799 [2024-12-07 01:05:59.761007] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-07 01:05:59.761019] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:43.799 --proc-type=auto ] 00:38:43.799 [2024-12-07 01:05:59.761326] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:43.799 [2024-12-07 01:05:59.761327] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:43.799 [2024-12-07 01:05:59.761401] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-07 01:05:59.761401] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:43.799 --proc-type=auto ] 00:38:43.799 [2024-12-07 01:05:59.946785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.059 [2024-12-07 01:05:59.988991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:44.059 [2024-12-07 01:06:00.070162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.059 [2024-12-07 01:06:00.114279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:44.059 [2024-12-07 01:06:00.125590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.059 [2024-12-07 01:06:00.163820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:44.059 [2024-12-07 01:06:00.194975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.318 [2024-12-07 01:06:00.233940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:44.318 Running I/O for 1 seconds... 00:38:44.318 Running I/O for 1 seconds... 00:38:44.318 Running I/O for 1 seconds... 00:38:44.318 Running I/O for 1 seconds... 
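Each of the four "Running I/O" jobs above is the same bdevperf binary pointed at the target through a generated JSON config; the bdev_nvme_attach_controller entry is rendered verbatim in the trace, and the write worker's launch reduces to roughly the following. The surrounding "subsystems" envelope is reconstructed from SPDK's standard JSON-config layout rather than shown in the log, and a temp file stands in for the /dev/fd/63 process substitution the harness actually uses.

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 0x10 core mask / shm id 1: 128-deep queue of 4 KiB writes for 1 second, 256 MB of hugepages
    "$BDEVPERF" -m 0x10 -i 1 --json "$cfg" -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    wait "$WRITE_PID"
    rm -f "$cfg"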
00:38:45.255 150064.00 IOPS, 586.19 MiB/s 00:38:45.255 Latency(us) 00:38:45.255 [2024-12-07T00:06:01.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.255 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:45.255 Nvme1n1 : 1.00 149737.04 584.91 0.00 0.00 850.01 362.57 2148.12 00:38:45.255 [2024-12-07T00:06:01.406Z] =================================================================================================================== 00:38:45.255 [2024-12-07T00:06:01.406Z] Total : 149737.04 584.91 0.00 0.00 850.01 362.57 2148.12 00:38:45.255 8884.00 IOPS, 34.70 MiB/s 00:38:45.255 Latency(us) 00:38:45.255 [2024-12-07T00:06:01.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.255 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:45.255 Nvme1n1 : 1.01 8943.13 34.93 0.00 0.00 14248.97 4587.52 16214.09 00:38:45.255 [2024-12-07T00:06:01.406Z] =================================================================================================================== 00:38:45.255 [2024-12-07T00:06:01.406Z] Total : 8943.13 34.93 0.00 0.00 14248.97 4587.52 16214.09 00:38:45.255 8690.00 IOPS, 33.95 MiB/s [2024-12-07T00:06:01.664Z] 8926.00 IOPS, 34.87 MiB/s 00:38:45.513 Latency(us) 00:38:45.513 [2024-12-07T00:06:01.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.513 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:45.513 Nvme1n1 : 1.01 8739.33 34.14 0.00 0.00 14573.35 4903.06 19903.53 00:38:45.513 [2024-12-07T00:06:01.664Z] =================================================================================================================== 00:38:45.513 [2024-12-07T00:06:01.664Z] Total : 8739.33 34.14 0.00 0.00 14573.35 4903.06 19903.53 00:38:45.513 00:38:45.513 Latency(us) 00:38:45.513 [2024-12-07T00:06:01.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.513 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:45.513 Nvme1n1 : 1.01 9005.57 35.18 0.00 0.00 14165.89 2536.49 20874.43 00:38:45.513 [2024-12-07T00:06:01.664Z] =================================================================================================================== 00:38:45.513 [2024-12-07T00:06:01.664Z] Total : 9005.57 35.18 0.00 0.00 14165.89 2536.49 20874.43 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 443253 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 443255 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 443258 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:45.513 01:06:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.513 rmmod nvme_tcp 00:38:45.513 rmmod nvme_fabrics 00:38:45.513 rmmod nvme_keyring 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 443219 ']' 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 443219 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 443219 ']' 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 443219 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.513 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443219 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443219' 00:38:45.773 killing process with pid 443219 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 443219 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 443219 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:45.773 01:06:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.773 01:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.311 00:38:48.311 real 0m7.131s 00:38:48.311 user 0m13.183s 00:38:48.311 sys 0m4.210s 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.311 ************************************ 00:38:48.311 END TEST nvmf_bdev_io_wait 00:38:48.311 ************************************ 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:48.311 ************************************ 00:38:48.311 START TEST nvmf_queue_depth 00:38:48.311 ************************************ 00:38:48.311 01:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:48.311 * Looking for test storage... 
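Stripped of the xtrace prefixes, the nvmftestfini teardown traced just above for the bdev_io_wait test comes down to the commands below. This is a condensed sketch of only what is visible in this log; the _remove_spdk_ns helper's body is not traced here, so it is left as an opaque step:
  sync
  modprobe -v -r nvme-tcp         # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
  modprobe -v -r nvme-fabrics
  kill 443219                     # killprocess: stop the nvmf_tgt (pid 443219) started for this test, then wait for it
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop the SPDK_NVMF-tagged firewall rules
  _remove_spdk_ns                 # removes the cvl_0_0_ns_spdk network namespace (implementation not shown in this log)
  ip -4 addr flush cvl_0_1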
00:38:48.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:48.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.311 --rc genhtml_branch_coverage=1 00:38:48.311 --rc genhtml_function_coverage=1 00:38:48.311 --rc genhtml_legend=1 00:38:48.311 --rc geninfo_all_blocks=1 00:38:48.311 --rc geninfo_unexecuted_blocks=1 00:38:48.311 00:38:48.311 ' 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:48.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.311 --rc genhtml_branch_coverage=1 00:38:48.311 --rc genhtml_function_coverage=1 00:38:48.311 --rc genhtml_legend=1 00:38:48.311 --rc geninfo_all_blocks=1 00:38:48.311 --rc geninfo_unexecuted_blocks=1 00:38:48.311 00:38:48.311 ' 00:38:48.311 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:48.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.311 --rc genhtml_branch_coverage=1 00:38:48.311 --rc genhtml_function_coverage=1 00:38:48.312 --rc genhtml_legend=1 00:38:48.312 --rc geninfo_all_blocks=1 00:38:48.312 --rc geninfo_unexecuted_blocks=1 00:38:48.312 00:38:48.312 ' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:48.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.312 --rc genhtml_branch_coverage=1 00:38:48.312 --rc genhtml_function_coverage=1 00:38:48.312 --rc genhtml_legend=1 00:38:48.312 --rc geninfo_all_blocks=1 00:38:48.312 --rc 
geninfo_unexecuted_blocks=1 00:38:48.312 00:38:48.312 ' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.312 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:48.313 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:48.313 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:48.313 01:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:50.215 01:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:50.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.215 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:50.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:50.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:50.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:50.216 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:50.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:50.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:38:50.475 00:38:50.475 --- 10.0.0.2 ping statistics --- 00:38:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.475 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:50.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:50.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:38:50.475 00:38:50.475 --- 10.0.0.1 ping statistics --- 00:38:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.475 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=445580 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 445580 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 445580 ']' 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
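In plain commands, the nvmf_tcp_init and nvmfappstart steps traced above for this queue_depth run amount roughly to the sketch below; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones this CI host happens to use, and paths are shortened to be relative to the SPDK checkout:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &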
00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.475 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.475 [2024-12-07 01:06:06.510253] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:50.475 [2024-12-07 01:06:06.511284] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:50.475 [2024-12-07 01:06:06.511362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:50.475 [2024-12-07 01:06:06.588222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.734 [2024-12-07 01:06:06.634892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.734 [2024-12-07 01:06:06.634941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.734 [2024-12-07 01:06:06.634965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.734 [2024-12-07 01:06:06.634976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.734 [2024-12-07 01:06:06.634985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:50.734 [2024-12-07 01:06:06.635536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.734 [2024-12-07 01:06:06.721896] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:50.734 [2024-12-07 01:06:06.722209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 [2024-12-07 01:06:06.772109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 Malloc0 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 [2024-12-07 01:06:06.836200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=445607 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 445607 /var/tmp/bdevperf.sock 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 445607 ']' 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.734 01:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:50.734 [2024-12-07 01:06:06.882284] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:38:50.734 [2024-12-07 01:06:06.882383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445607 ] 00:38:50.993 [2024-12-07 01:06:06.950646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.993 [2024-12-07 01:06:07.000614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.993 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.993 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:50.993 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:50.993 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.993 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:51.251 NVMe0n1 00:38:51.251 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.251 01:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:51.251 Running I/O for 10 seconds... 00:38:53.551 8192.00 IOPS, 32.00 MiB/s [2024-12-07T00:06:10.637Z] 8306.50 IOPS, 32.45 MiB/s [2024-12-07T00:06:11.574Z] 8489.33 IOPS, 33.16 MiB/s [2024-12-07T00:06:12.520Z] 8449.75 IOPS, 33.01 MiB/s [2024-12-07T00:06:13.454Z] 8578.40 IOPS, 33.51 MiB/s [2024-12-07T00:06:14.389Z] 8560.67 IOPS, 33.44 MiB/s [2024-12-07T00:06:15.763Z] 8629.71 IOPS, 33.71 MiB/s [2024-12-07T00:06:16.697Z] 8651.88 IOPS, 33.80 MiB/s [2024-12-07T00:06:17.632Z] 8647.44 IOPS, 33.78 MiB/s [2024-12-07T00:06:17.632Z] 8686.70 IOPS, 33.93 MiB/s 00:39:01.481 Latency(us) 00:39:01.481 [2024-12-07T00:06:17.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:01.481 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:01.481 Verification LBA range: start 0x0 length 0x4000 00:39:01.481 NVMe0n1 : 10.10 8700.26 33.99 0.00 0.00 117165.14 22913.33 71846.87 00:39:01.481 [2024-12-07T00:06:17.632Z] =================================================================================================================== 00:39:01.481 [2024-12-07T00:06:17.632Z] Total : 8700.26 33.99 0.00 0.00 117165.14 22913.33 71846.87 00:39:01.481 { 00:39:01.481 "results": [ 00:39:01.481 { 00:39:01.481 "job": "NVMe0n1", 00:39:01.481 "core_mask": "0x1", 00:39:01.481 "workload": "verify", 00:39:01.481 "status": "finished", 00:39:01.481 "verify_range": { 00:39:01.481 "start": 0, 00:39:01.481 "length": 16384 00:39:01.481 }, 00:39:01.481 "queue_depth": 1024, 00:39:01.481 "io_size": 4096, 00:39:01.481 "runtime": 10.096136, 00:39:01.481 "iops": 8700.25918826767, 00:39:01.481 "mibps": 33.985387454170585, 00:39:01.481 "io_failed": 0, 00:39:01.481 "io_timeout": 0, 00:39:01.481 "avg_latency_us": 117165.14160761291, 00:39:01.481 "min_latency_us": 22913.327407407407, 00:39:01.481 "max_latency_us": 71846.87407407408 00:39:01.481 } 00:39:01.481 ], 
00:39:01.481 "core_count": 1 00:39:01.481 } 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 445607 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 445607 ']' 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 445607 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445607 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445607' 00:39:01.481 killing process with pid 445607 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 445607 00:39:01.481 Received shutdown signal, test time was about 10.000000 seconds 00:39:01.481 00:39:01.481 Latency(us) 00:39:01.481 [2024-12-07T00:06:17.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:01.481 [2024-12-07T00:06:17.632Z] =================================================================================================================== 00:39:01.481 [2024-12-07T00:06:17.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:01.481 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 445607 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.739 rmmod nvme_tcp 00:39:01.739 rmmod nvme_fabrics 00:39:01.739 rmmod nvme_keyring 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:01.739 01:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 445580 ']' 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 445580 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 445580 ']' 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 445580 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445580 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:01.739 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:01.740 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445580' 00:39:01.740 killing process with pid 445580 00:39:01.740 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 445580 00:39:01.740 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 445580 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:01.999 01:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:01.999 01:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:01.999 01:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:01.999 01:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.999 01:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.999 01:06:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.907 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:03.907 00:39:03.907 real 0m16.072s 00:39:03.907 user 0m22.199s 00:39:03.907 sys 0m3.280s 00:39:03.908 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.908 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:03.908 ************************************ 00:39:03.908 END TEST nvmf_queue_depth 00:39:03.908 ************************************ 00:39:04.167 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:04.168 ************************************ 00:39:04.168 START TEST nvmf_target_multipath 00:39:04.168 ************************************ 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:04.168 * Looking for test storage... 00:39:04.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:04.168 01:06:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:04.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.168 --rc genhtml_branch_coverage=1 00:39:04.168 --rc genhtml_function_coverage=1 00:39:04.168 --rc genhtml_legend=1 00:39:04.168 --rc geninfo_all_blocks=1 00:39:04.168 --rc geninfo_unexecuted_blocks=1 00:39:04.168 00:39:04.168 ' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:04.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.168 --rc genhtml_branch_coverage=1 00:39:04.168 --rc genhtml_function_coverage=1 00:39:04.168 --rc genhtml_legend=1 00:39:04.168 --rc geninfo_all_blocks=1 00:39:04.168 --rc geninfo_unexecuted_blocks=1 00:39:04.168 00:39:04.168 ' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:04.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.168 --rc genhtml_branch_coverage=1 00:39:04.168 --rc genhtml_function_coverage=1 00:39:04.168 --rc genhtml_legend=1 00:39:04.168 --rc geninfo_all_blocks=1 00:39:04.168 --rc 
geninfo_unexecuted_blocks=1 00:39:04.168 00:39:04.168 ' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:04.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.168 --rc genhtml_branch_coverage=1 00:39:04.168 --rc genhtml_function_coverage=1 00:39:04.168 --rc genhtml_legend=1 00:39:04.168 --rc geninfo_all_blocks=1 00:39:04.168 --rc geninfo_unexecuted_blocks=1 00:39:04.168 00:39:04.168 ' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
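The xtrace above shows the harness picking its lcov option names by running 'lt 1.15 2': the two version strings are split on '.', '-' and ':' and compared component by component. A minimal standalone sketch of that comparison, written in bash for illustration only (simplified, not the exact cmp_versions/lt helpers from scripts/common.sh):

    #!/usr/bin/env bash
    # Minimal sketch of the component-wise version comparison seen in the trace
    # (cmp_versions / lt in scripts/common.sh); simplified for illustration only.
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1    # first argument is newer: not "less than"
            (( a < b )) && return 0    # first argument is older: "less than"
        done
        return 1                       # versions are equal
    }

    # Mirrors the 'lt 1.15 2' check above: an lcov 1.x still wants the old
    # '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling.
    if version_lt 1.15 2; then
        echo "lcov older than 2: use the legacy --rc lcov_* option names"
    fi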
00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.168 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:04.169 01:06:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:04.169 01:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
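The lines that follow show gather_supported_nvmf_pci_devs walking the supported Intel/Mellanox device IDs and resolving each matching PCI address (8086:159b, the Intel E810 ports on this host) to its kernel interface through /sys/bus/pci/devices/<pci>/net. A rough bash sketch of that sysfs lookup, assuming lspci is available and that the "up" test reads operstate (the actual common.sh logic differs in detail):

    #!/usr/bin/env bash
    # Rough sketch: map supported NIC PCI addresses to net device names via sysfs,
    # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) pattern in the
    # trace. The 8086:159b (Intel E810) ID is taken from the log; lspci and the
    # operstate-based "up" check are assumptions of this sketch.
    net_devs=()
    while read -r addr _; do
        pci="0000:${addr}"                      # lspci omits the PCI domain prefix
        for net_path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net_path ]] || continue      # no net/ entry if bound to a userspace driver
            dev=${net_path##*/}
            if [[ $(cat "/sys/class/net/$dev/operstate") == up ]]; then
                net_devs+=("$dev")
            fi
        done
    done < <(lspci -d 8086:159b)

    (( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"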
00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.752 01:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:06.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:06.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:06.752 01:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:06.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:06.752 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:06.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:06.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:39:06.753 00:39:06.753 --- 10.0.0.2 ping statistics --- 00:39:06.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.753 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:06.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:39:06.753 00:39:06.753 --- 10.0.0.1 ping statistics --- 00:39:06.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.753 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:06.753 only one NIC for nvmf test 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:06.753 rmmod nvme_tcp 00:39:06.753 rmmod nvme_fabrics 00:39:06.753 rmmod nvme_keyring 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:06.753 01:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.753 01:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:08.695 01:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.695 00:39:08.695 real 0m4.643s 00:39:08.695 user 0m0.937s 00:39:08.695 sys 0m1.624s 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:08.695 ************************************ 00:39:08.695 END TEST nvmf_target_multipath 00:39:08.695 ************************************ 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:08.695 ************************************ 00:39:08.695 START TEST nvmf_zcopy 00:39:08.695 ************************************ 00:39:08.695 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:08.955 * Looking for test storage... 
00:39:08.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:08.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.955 --rc genhtml_branch_coverage=1 00:39:08.955 --rc genhtml_function_coverage=1 00:39:08.955 --rc genhtml_legend=1 00:39:08.955 --rc geninfo_all_blocks=1 00:39:08.955 --rc geninfo_unexecuted_blocks=1 00:39:08.955 00:39:08.955 ' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:08.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.955 --rc genhtml_branch_coverage=1 00:39:08.955 --rc genhtml_function_coverage=1 00:39:08.955 --rc genhtml_legend=1 00:39:08.955 --rc geninfo_all_blocks=1 00:39:08.955 --rc geninfo_unexecuted_blocks=1 00:39:08.955 00:39:08.955 ' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:08.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.955 --rc genhtml_branch_coverage=1 00:39:08.955 --rc genhtml_function_coverage=1 00:39:08.955 --rc genhtml_legend=1 00:39:08.955 --rc geninfo_all_blocks=1 00:39:08.955 --rc geninfo_unexecuted_blocks=1 00:39:08.955 00:39:08.955 ' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:08.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.955 --rc genhtml_branch_coverage=1 00:39:08.955 --rc genhtml_function_coverage=1 00:39:08.955 --rc genhtml_legend=1 00:39:08.955 --rc geninfo_all_blocks=1 00:39:08.955 --rc geninfo_unexecuted_blocks=1 00:39:08.955 00:39:08.955 ' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.955 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.956 01:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:08.956 01:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:11.485 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:11.486 01:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:11.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:11.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:11.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:11.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:11.486 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.487 01:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:39:11.487 00:39:11.487 --- 10.0.0.2 ping statistics --- 00:39:11.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.487 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:11.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:39:11.487 00:39:11.487 --- 10.0.0.1 ping statistics --- 00:39:11.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.487 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=451299 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 451299 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 451299 ']' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 [2024-12-07 01:06:27.273467] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:11.487 [2024-12-07 01:06:27.274586] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:39:11.487 [2024-12-07 01:06:27.274654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.487 [2024-12-07 01:06:27.352925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.487 [2024-12-07 01:06:27.401207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.487 [2024-12-07 01:06:27.401255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.487 [2024-12-07 01:06:27.401285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.487 [2024-12-07 01:06:27.401297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.487 [2024-12-07 01:06:27.401307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.487 [2024-12-07 01:06:27.401842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.487 [2024-12-07 01:06:27.486701] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:11.487 [2024-12-07 01:06:27.487012] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
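For reference, the network plumbing that nvmftestinit performs in the trace above can be collected into a single sketch. This is not part of the test output; it simply restates the commands visible above, assuming the same interface names (cvl_0_0 for the target-side port, cvl_0_1 for the initiator side), the 10.0.0.0/24 test addresses, and an SPDK checkout as the working directory:

  # Move the target-side port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address lives inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port (the harness additionally tags the rule with an SPDK_NVMF comment)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check connectivity in both directions, as the ping output above shows
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmfappstart then launches the target inside the namespace: interrupt mode, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2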
00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 [2024-12-07 01:06:27.538433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.487 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.488 [2024-12-07 01:06:27.554590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:11.488 01:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.488 malloc0 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.488 { 00:39:11.488 "params": { 00:39:11.488 "name": "Nvme$subsystem", 00:39:11.488 "trtype": "$TEST_TRANSPORT", 00:39:11.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.488 "adrfam": "ipv4", 00:39:11.488 "trsvcid": "$NVMF_PORT", 00:39:11.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.488 "hdgst": ${hdgst:-false}, 00:39:11.488 "ddgst": ${ddgst:-false} 00:39:11.488 }, 00:39:11.488 "method": "bdev_nvme_attach_controller" 00:39:11.488 } 00:39:11.488 EOF 00:39:11.488 )") 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:11.488 01:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.488 "params": { 00:39:11.488 "name": "Nvme1", 00:39:11.488 "trtype": "tcp", 00:39:11.488 "traddr": "10.0.0.2", 00:39:11.488 "adrfam": "ipv4", 00:39:11.488 "trsvcid": "4420", 00:39:11.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.488 "hdgst": false, 00:39:11.488 "ddgst": false 00:39:11.488 }, 00:39:11.488 "method": "bdev_nvme_attach_controller" 00:39:11.488 }' 00:39:11.748 [2024-12-07 01:06:27.639063] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:39:11.748 [2024-12-07 01:06:27.639142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451324 ] 00:39:11.748 [2024-12-07 01:06:27.710428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.748 [2024-12-07 01:06:27.756382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.006 Running I/O for 10 seconds... 00:39:13.876 5745.00 IOPS, 44.88 MiB/s [2024-12-07T00:06:30.962Z] 5778.00 IOPS, 45.14 MiB/s [2024-12-07T00:06:32.336Z] 5787.67 IOPS, 45.22 MiB/s [2024-12-07T00:06:33.268Z] 5801.00 IOPS, 45.32 MiB/s [2024-12-07T00:06:34.199Z] 5806.20 IOPS, 45.36 MiB/s [2024-12-07T00:06:35.130Z] 5806.17 IOPS, 45.36 MiB/s [2024-12-07T00:06:36.061Z] 5809.43 IOPS, 45.39 MiB/s [2024-12-07T00:06:36.993Z] 5813.38 IOPS, 45.42 MiB/s [2024-12-07T00:06:38.366Z] 5819.33 IOPS, 45.46 MiB/s [2024-12-07T00:06:38.366Z] 5818.40 IOPS, 45.46 MiB/s 00:39:22.215 Latency(us) 00:39:22.215 [2024-12-07T00:06:38.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.215 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:22.215 Verification LBA range: start 0x0 length 0x1000 00:39:22.216 Nvme1n1 : 10.01 5823.54 45.50 0.00 0.00 21906.99 3046.21 31845.64 00:39:22.216 [2024-12-07T00:06:38.367Z] =================================================================================================================== 00:39:22.216 [2024-12-07T00:06:38.367Z] Total : 5823.54 45.50 0.00 0.00 21906.99 3046.21 31845.64 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=452615 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:22.216 { 00:39:22.216 "params": { 00:39:22.216 "name": "Nvme$subsystem", 00:39:22.216 "trtype": "$TEST_TRANSPORT", 00:39:22.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:22.216 "adrfam": "ipv4", 00:39:22.216 "trsvcid": "$NVMF_PORT", 00:39:22.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:22.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:22.216 "hdgst": ${hdgst:-false}, 00:39:22.216 "ddgst": ${ddgst:-false} 00:39:22.216 }, 00:39:22.216 "method": "bdev_nvme_attach_controller" 00:39:22.216 } 00:39:22.216 EOF 00:39:22.216 )") 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:22.216 
[2024-12-07 01:06:38.162398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.162436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:22.216 01:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:22.216 "params": { 00:39:22.216 "name": "Nvme1", 00:39:22.216 "trtype": "tcp", 00:39:22.216 "traddr": "10.0.0.2", 00:39:22.216 "adrfam": "ipv4", 00:39:22.216 "trsvcid": "4420", 00:39:22.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:22.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:22.216 "hdgst": false, 00:39:22.216 "ddgst": false 00:39:22.216 }, 00:39:22.216 "method": "bdev_nvme_attach_controller" 00:39:22.216 }' 00:39:22.216 [2024-12-07 01:06:38.170327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.170363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.178327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.178362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.186326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.186360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.194324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.194356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.202322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.202356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.204836] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:39:22.216 [2024-12-07 01:06:38.204906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452615 ] 00:39:22.216 [2024-12-07 01:06:38.210325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.210358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.218331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.218364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.226330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.226351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.234329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.234369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.242328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.242347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.250328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.250362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.258326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.258345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.266326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.266359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.274258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.216 [2024-12-07 01:06:38.274325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.274358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.282364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.282399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.290353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.290384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.298329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.298363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.306327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.306361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:22.216 [2024-12-07 01:06:38.314326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.314359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.321844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.216 [2024-12-07 01:06:38.322328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.322361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.330326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.330360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.338353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.338382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.346382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.346422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.354374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.354406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.216 [2024-12-07 01:06:38.362383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.216 [2024-12-07 01:06:38.362420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.370357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.370389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.378378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.378413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.386365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.386397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.394328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.394348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.402368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.402401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.410355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.410389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.418359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.418396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 
01:06:38.426330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.426352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.434356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.434379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.442339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.442376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.450335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.450372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.458332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.458355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.475 [2024-12-07 01:06:38.466333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.475 [2024-12-07 01:06:38.466355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.474331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.474353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.482327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.482347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.490327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.490347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.498326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.498345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.506326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.506345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.514330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.514365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.522328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.522349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.530325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.530345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.538326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.538359] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.546329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.546363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.554328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.554347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.562329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.562350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.570326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.570350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.578327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.578362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.586327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.586361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.594326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.594345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.476 [2024-12-07 01:06:38.602328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.476 [2024-12-07 01:06:38.602348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.733 [2024-12-07 01:06:38.650219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.733 [2024-12-07 01:06:38.650249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.733 [2024-12-07 01:06:38.654332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.733 [2024-12-07 01:06:38.654368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.733 [2024-12-07 01:06:38.662337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.733 [2024-12-07 01:06:38.662373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.733 Running I/O for 5 seconds... 
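Collected in one place, the provisioning that the zcopy test issues above (through the harness's rpc_cmd helper, against the target's /var/tmp/spdk.sock RPC socket) and the two bdevperf runs that drive it look roughly as follows. This is a sketch restating the calls visible in the trace, not additional test output; issuing them by hand through scripts/rpc.py, with paths relative to the SPDK checkout, is an assumption about how one would reproduce the sequence outside the harness:

  # Target provisioning: TCP transport with zero-copy, one subsystem, listeners, one malloc-backed namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: bdevperf takes its bdev_nvme_attach_controller config from
  # gen_nvmf_target_json over a /dev/fd pipe (the JSON body is printed in the trace above)
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192       # first run, ~5.8k IOPS above
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192  # second run, started here

The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages that fills the rest of this section is emitted while the second bdevperf run is in flight: each pair corresponds to another nvmf_subsystem_add_ns RPC for NSID 1, which already exists. The repeated failures appear to be exercised deliberately by the test rather than indicating a target fault.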
00:39:22.733 [2024-12-07 01:06:38.678191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.733 [2024-12-07 01:06:38.678219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.689744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.689771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.702933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.702959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.712935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.712959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.727167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.727210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.736830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.736854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.751956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.751980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.761184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.761210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.777197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.777224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.792428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.792454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.802114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.802140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.816217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.816242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.825840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.825867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.838159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.838185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.849034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 
[2024-12-07 01:06:38.849075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.862061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.862089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.734 [2024-12-07 01:06:38.871593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.734 [2024-12-07 01:06:38.871619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.887803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.887827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.906876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.906900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.916888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.916913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.931727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.931750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.941108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.941135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.956557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.956581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.974207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.974234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.983939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.983963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:38.999757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:38.999782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:39.009058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:39.009086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:39.023015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:39.023066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:39.032486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:39.032512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.992 [2024-12-07 01:06:39.044378] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.992 [2024-12-07 01:06:39.044403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeats continuously from 00:39:22.992 [2024-12-07 01:06:39.044403] through 00:39:26.618 [2024-12-07 01:06:42.761930]:
  subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
  nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
Throughput samples reported within the same interval:
  11577.00 IOPS, 90.45 MiB/s [2024-12-07T00:06:39.920Z]
  11627.50 IOPS, 90.84 MiB/s [2024-12-07T00:06:40.698Z]
  11653.33 IOPS, 91.04 MiB/s [2024-12-07T00:06:41.734Z]
  11666.00 IOPS, 91.14 MiB/s [2024-12-07T00:06:42.769Z] ...]
00:39:26.618 [2024-12-07 01:06:42.761930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:26.618 [2024-12-07 01:06:42.761970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.773592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.773616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.784623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.784647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.800070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.800096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.809543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.809566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.821504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.821529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.836309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.836335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.845868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.845893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.857694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.857718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.870600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.870641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.880217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.880242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.891854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.891880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.909309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.909334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.923856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.923882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.933298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.933324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.949476] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.949500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.877 [2024-12-07 01:06:42.963750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.877 [2024-12-07 01:06:42.963777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.878 [2024-12-07 01:06:42.973499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.878 [2024-12-07 01:06:42.973523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.878 [2024-12-07 01:06:42.989277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.878 [2024-12-07 01:06:42.989303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.878 [2024-12-07 01:06:43.003801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.878 [2024-12-07 01:06:43.003827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.878 [2024-12-07 01:06:43.013791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.878 [2024-12-07 01:06:43.013815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.878 [2024-12-07 01:06:43.025776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.878 [2024-12-07 01:06:43.025799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.038089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.038118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.047494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.047518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.059364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.059388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.070036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.070062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.081055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.081080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.096008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.096034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.105289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.105314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.118881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.118905] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.128658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.128682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.144785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.144825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.158490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.158516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.167616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.167640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.179415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.179439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.136 [2024-12-07 01:06:43.189940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.136 [2024-12-07 01:06:43.189977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.201006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.201032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.214265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.214292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.223746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.223769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.235433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.235456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.246291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.246317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.257002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.257028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.137 [2024-12-07 01:06:43.272796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.137 [2024-12-07 01:06:43.272821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.290405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.290430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.300460] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.300486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.316409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.316435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.326459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.326482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.338226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.338255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.348830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.348854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.364229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.364254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.373476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.373500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.387479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.387504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.397014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.397039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.410962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.411010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.420601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.420626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.435743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.435767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.445165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.445192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.459159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.459186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.469072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.469099] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.484873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.484898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.499711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.499738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.509541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.509566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.521628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.395 [2024-12-07 01:06:43.521653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.395 [2024-12-07 01:06:43.532562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.396 [2024-12-07 01:06:43.532587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.548694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.548721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.564453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.564496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.573777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.573805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.585709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.585735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.596719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.596743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.609990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.610027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.621581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.621609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.636139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.636167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.645396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.645421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.661572] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.661604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.673817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.673843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 11661.80 IOPS, 91.11 MiB/s [2024-12-07T00:06:43.805Z] [2024-12-07 01:06:43.683149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.683176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 00:39:27.654 Latency(us) 00:39:27.654 [2024-12-07T00:06:43.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.654 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:27.654 Nvme1n1 : 5.01 11670.68 91.18 0.00 0.00 10954.76 2985.53 19126.80 00:39:27.654 [2024-12-07T00:06:43.805Z] =================================================================================================================== 00:39:27.654 [2024-12-07T00:06:43.805Z] Total : 11670.68 91.18 0.00 0.00 10954.76 2985.53 19126.80 00:39:27.654 [2024-12-07 01:06:43.690499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.690524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.698335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.698372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.706371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.706416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.714374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.714419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.722375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.722419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.730368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.730410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.738361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.738401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.654 [2024-12-07 01:06:43.746374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.654 [2024-12-07 01:06:43.746418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.754368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.754410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 
01:06:43.762368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.762410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.770374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.770419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.778373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.778418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.786376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.786419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.794368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.794426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.655 [2024-12-07 01:06:43.802369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.655 [2024-12-07 01:06:43.802412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.810381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.810424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.818340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.818378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.826330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.826357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.834373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.834420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.842370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.842413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.850326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.850359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.858326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.858360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 [2024-12-07 01:06:43.870329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.913 [2024-12-07 01:06:43.870362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (452615) - No such process 00:39:27.913 01:06:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 452615 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.913 delay0 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.913 01:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:27.913 [2024-12-07 01:06:44.029146] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:36.020 Initializing NVMe Controllers 00:39:36.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:36.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:36.020 Initialization complete. Launching workers. 
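(Editor's note: for context, the zcopy steps traced above reduce to a few RPCs plus the abort example whose completion statistics follow. A minimal standalone sketch of the same sequence, assuming the target application started earlier in this run is still listening on its default RPC socket and that the malloc0 base bdev from the earlier setup still exists; the bdev name delay0, the subsystem NQN, and all arguments are copied from the trace, and SPDK_DIR would need to match the local checkout.)

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Swap the existing namespace 1 for a delay bdev, mirroring zcopy.sh above.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
"$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example for 5 s at queue depth 64, 50/50 randrw, against the
# TCP listener on 10.0.0.2:4420 (same arguments as the invocation traced above).
"$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'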
00:39:36.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 18864 00:39:36.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18966, failed to submit 132 00:39:36.020 success 18898, unsuccessful 68, failed 0 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:36.021 rmmod nvme_tcp 00:39:36.021 rmmod nvme_fabrics 00:39:36.021 rmmod nvme_keyring 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 451299 ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 451299 ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 451299' 00:39:36.021 killing process with pid 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 451299 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:36.021 01:06:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.021 01:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.925 00:39:37.925 real 0m28.870s 00:39:37.925 user 0m41.218s 00:39:37.925 sys 0m10.022s 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:37.925 ************************************ 00:39:37.925 END TEST nvmf_zcopy 00:39:37.925 ************************************ 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.925 ************************************ 00:39:37.925 START TEST nvmf_nmic 00:39:37.925 ************************************ 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:37.925 * Looking for test storage... 
00:39:37.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:39:37.925 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:37.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.926 --rc genhtml_branch_coverage=1 00:39:37.926 --rc genhtml_function_coverage=1 00:39:37.926 --rc genhtml_legend=1 00:39:37.926 --rc geninfo_all_blocks=1 00:39:37.926 --rc geninfo_unexecuted_blocks=1 00:39:37.926 00:39:37.926 ' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:37.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.926 --rc genhtml_branch_coverage=1 00:39:37.926 --rc genhtml_function_coverage=1 00:39:37.926 --rc genhtml_legend=1 00:39:37.926 --rc geninfo_all_blocks=1 00:39:37.926 --rc geninfo_unexecuted_blocks=1 00:39:37.926 00:39:37.926 ' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:37.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.926 --rc genhtml_branch_coverage=1 00:39:37.926 --rc genhtml_function_coverage=1 00:39:37.926 --rc genhtml_legend=1 00:39:37.926 --rc geninfo_all_blocks=1 00:39:37.926 --rc geninfo_unexecuted_blocks=1 00:39:37.926 00:39:37.926 ' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:37.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.926 --rc genhtml_branch_coverage=1 00:39:37.926 --rc genhtml_function_coverage=1 00:39:37.926 --rc genhtml_legend=1 00:39:37.926 --rc geninfo_all_blocks=1 00:39:37.926 --rc geninfo_unexecuted_blocks=1 00:39:37.926 00:39:37.926 ' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.926 01:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.926 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.927 01:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.833 01:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.833 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:39.834 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.834 01:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:39.834 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:39.834 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.834 
01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:39.834 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.834 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.092 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.092 01:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
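The nvmf_tcp_init sequence traced here reduces to moving one port of the NIC pair into a private network namespace and addressing both ends; a minimal sketch in plain iproute2/iptables terms follows (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones discovered above, and the link-up, iptables and ping checks appear immediately below in the trace).
# Sketch of the namespace setup performed by nvmf_tcp_init, condensed from the
# commands traced in this run; not the helper itself. Run as root.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1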
00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:39:40.092 00:39:40.092 --- 10.0.0.2 ping statistics --- 00:39:40.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.092 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:40.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:39:40.092 00:39:40.092 --- 10.0.0.1 ping statistics --- 00:39:40.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.092 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=456001 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 456001 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 456001 ']' 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.092 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.092 [2024-12-07 01:06:56.137484] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:40.092 [2024-12-07 01:06:56.138607] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:39:40.092 [2024-12-07 01:06:56.138658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.092 [2024-12-07 01:06:56.216182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:40.350 [2024-12-07 01:06:56.266944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.350 [2024-12-07 01:06:56.267014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.350 [2024-12-07 01:06:56.267030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.350 [2024-12-07 01:06:56.267056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.350 [2024-12-07 01:06:56.267066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.350 [2024-12-07 01:06:56.268750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.350 [2024-12-07 01:06:56.268835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:40.350 [2024-12-07 01:06:56.268838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.350 [2024-12-07 01:06:56.268790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.350 [2024-12-07 01:06:56.353822] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:40.350 [2024-12-07 01:06:56.353959] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:40.350 [2024-12-07 01:06:56.354250] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
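Condensed, nvmfappstart boots the target inside that namespace with interrupt mode enabled and then waits for its RPC socket; a rough sketch is below. The polling loop is an assumed stand-in for the suite's waitforlisten helper, and the relative paths assume the SPDK repository root.
# Rough sketch of the target start traced above; the wait loop is an assumption,
# not the suite's waitforlisten implementation.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# block until the JSON-RPC socket answers before issuing configuration RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done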
00:39:40.350 [2024-12-07 01:06:56.354807] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.350 [2024-12-07 01:06:56.355036] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:40.350 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.350 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 [2024-12-07 01:06:56.409510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 Malloc0 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.351 
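The rpc_cmd calls traced here provision the target for the test; written against scripts/rpc.py directly (the test goes through its rpc_cmd wrapper instead), the same sequence looks roughly like this.
# Same provisioning sequence as the traced rpc_cmd calls, issued via rpc.py;
# arguments mirror the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512).
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420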
01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 [2024-12-07 01:06:56.481704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:40.351 test case1: single bdev can't be used in multiple subsystems 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.351 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.610 [2024-12-07 01:06:56.505475] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:40.610 [2024-12-07 01:06:56.505506] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:40.610 [2024-12-07 01:06:56.505520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.610 request: 00:39:40.610 { 00:39:40.610 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:40.610 "namespace": { 00:39:40.610 "bdev_name": "Malloc0", 00:39:40.610 "no_auto_visible": false, 00:39:40.610 "hide_metadata": false 00:39:40.610 }, 00:39:40.610 "method": "nvmf_subsystem_add_ns", 00:39:40.610 "req_id": 1 00:39:40.610 } 00:39:40.610 Got JSON-RPC error response 00:39:40.610 response: 00:39:40.610 { 00:39:40.610 "code": -32602, 00:39:40.610 "message": "Invalid parameters" 00:39:40.610 } 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:40.610 01:06:56 
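Test case1 expects the second nvmf_subsystem_add_ns to fail because Malloc0 is already claimed exclusive_write by cnode1; reconstructed from the trace, the check in target/nmic.sh amounts to the following (rpc_cmd is the suite's wrapper around rpc.py, and the wording of the failure branch here is an assumption).
# Reconstruction of the expected-failure check traced above, not the script verbatim.
nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo "sharing Malloc0 across two subsystems unexpectedly succeeded" >&2
    exit 1
fi
echo ' Adding namespace failed - expected result.'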
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:40.610 Adding namespace failed - expected result. 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:40.610 test case2: host connect to nvmf target in multiple paths 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:40.610 [2024-12-07 01:06:56.513556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:40.610 01:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:40.868 01:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:40.868 01:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:40.868 01:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:40.868 01:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:40.868 01:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:43.399 01:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:43.399 [global] 00:39:43.399 thread=1 00:39:43.399 invalidate=1 
00:39:43.399 rw=write 00:39:43.399 time_based=1 00:39:43.399 runtime=1 00:39:43.399 ioengine=libaio 00:39:43.399 direct=1 00:39:43.399 bs=4096 00:39:43.399 iodepth=1 00:39:43.399 norandommap=0 00:39:43.399 numjobs=1 00:39:43.399 00:39:43.399 verify_dump=1 00:39:43.399 verify_backlog=512 00:39:43.399 verify_state_save=0 00:39:43.399 do_verify=1 00:39:43.399 verify=crc32c-intel 00:39:43.399 [job0] 00:39:43.399 filename=/dev/nvme0n1 00:39:43.399 Could not set queue depth (nvme0n1) 00:39:43.399 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:43.399 fio-3.35 00:39:43.399 Starting 1 thread 00:39:44.334 00:39:44.334 job0: (groupid=0, jobs=1): err= 0: pid=456499: Sat Dec 7 01:07:00 2024 00:39:44.334 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(100KiB/1041msec) 00:39:44.334 slat (nsec): min=7733, max=32994, avg=17779.00, stdev=6768.59 00:39:44.334 clat (usec): min=397, max=41067, avg=37736.92, stdev=11162.83 00:39:44.334 lat (usec): min=416, max=41077, avg=37754.70, stdev=11162.47 00:39:44.334 clat percentiles (usec): 00:39:44.334 | 1.00th=[ 400], 5.00th=[ 898], 10.00th=[40633], 20.00th=[41157], 00:39:44.334 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:44.334 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:44.334 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:44.334 | 99.99th=[41157] 00:39:44.334 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:39:44.334 slat (nsec): min=7529, max=70986, avg=16896.36, stdev=7487.00 00:39:44.334 clat (usec): min=144, max=317, avg=168.89, stdev=22.84 00:39:44.334 lat (usec): min=154, max=339, avg=185.79, stdev=26.67 00:39:44.334 clat percentiles (usec): 00:39:44.334 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:39:44.334 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:39:44.334 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 229], 00:39:44.334 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 318], 00:39:44.334 | 99.99th=[ 318] 00:39:44.334 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:44.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:44.334 lat (usec) : 250=92.36%, 500=3.17%, 1000=0.19% 00:39:44.334 lat (msec) : 50=4.28% 00:39:44.334 cpu : usr=0.19%, sys=1.06%, ctx=537, majf=0, minf=1 00:39:44.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:44.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:44.334 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:44.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:44.334 00:39:44.334 Run status group 0 (all jobs): 00:39:44.334 READ: bw=96.1KiB/s (98.4kB/s), 96.1KiB/s-96.1KiB/s (98.4kB/s-98.4kB/s), io=100KiB (102kB), run=1041-1041msec 00:39:44.334 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:39:44.334 00:39:44.334 Disk stats (read/write): 00:39:44.334 nvme0n1: ios=71/512, merge=0/0, ticks=797/77, in_queue=874, util=91.38% 00:39:44.334 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:44.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:44.593 01:07:00 
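For test case2 the host connects to cnode1 over both listeners (ports 4420 and 4421) and scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v drives a 1-second verified write job against the resulting /dev/nvme0n1. The job file it generates, reconstructed from the parameter dump above, is roughly the sketch below; the nmic.fio filename is just a placeholder.
# Reconstruction of the fio job from the traced parameter dump; run it with
# plain fio once the namespace /dev/nvme0n1 is connected.
cat > nmic.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic.fio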
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:44.593 rmmod nvme_tcp 00:39:44.593 rmmod nvme_fabrics 00:39:44.593 rmmod nvme_keyring 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 456001 ']' 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 456001 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 456001 ']' 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 456001 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456001 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 456001' 00:39:44.593 killing process with pid 456001 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 456001 00:39:44.593 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 456001 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.853 01:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:47.405 00:39:47.405 real 0m9.223s 00:39:47.405 user 0m17.689s 00:39:47.405 sys 0m3.204s 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:47.405 ************************************ 00:39:47.405 END TEST nvmf_nmic 00:39:47.405 ************************************ 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:47.405 ************************************ 00:39:47.405 START TEST nvmf_fio_target 00:39:47.405 ************************************ 00:39:47.405 01:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:47.405 * Looking for test storage... 
00:39:47.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.405 --rc genhtml_branch_coverage=1 00:39:47.405 --rc genhtml_function_coverage=1 00:39:47.405 --rc genhtml_legend=1 00:39:47.405 --rc geninfo_all_blocks=1 00:39:47.405 --rc geninfo_unexecuted_blocks=1 00:39:47.405 00:39:47.405 ' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.405 --rc genhtml_branch_coverage=1 00:39:47.405 --rc genhtml_function_coverage=1 00:39:47.405 --rc genhtml_legend=1 00:39:47.405 --rc geninfo_all_blocks=1 00:39:47.405 --rc geninfo_unexecuted_blocks=1 00:39:47.405 00:39:47.405 ' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.405 --rc genhtml_branch_coverage=1 00:39:47.405 --rc genhtml_function_coverage=1 00:39:47.405 --rc genhtml_legend=1 00:39:47.405 --rc geninfo_all_blocks=1 00:39:47.405 --rc geninfo_unexecuted_blocks=1 00:39:47.405 00:39:47.405 ' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:47.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:47.405 --rc genhtml_branch_coverage=1 00:39:47.405 --rc genhtml_function_coverage=1 00:39:47.405 --rc genhtml_legend=1 00:39:47.405 --rc geninfo_all_blocks=1 00:39:47.405 --rc geninfo_unexecuted_blocks=1 00:39:47.405 
00:39:47.405 ' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.405 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:47.406 01:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:49.312 01:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:49.312 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:49.313 01:07:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:49.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:49.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:49.313 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:49.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:49.313 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:49.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:49.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:39:49.571 00:39:49.571 --- 10.0.0.2 ping statistics --- 00:39:49.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.571 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:49.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:49.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:39:49.571 00:39:49.571 --- 10.0.0.1 ping statistics --- 00:39:49.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.571 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=458690 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 458690 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 458690 ']' 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
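[editor's note] The trace above builds the NVMe/TCP test bed by hand before the target is started. A condensed recap of that sequence follows; it is an illustrative sketch only, and the interface names (cvl_0_0/cvl_0_1), addresses, and binary path are the values used in this particular run.

    # One port of the NIC pair is moved into a private namespace for the target,
    # the other stays on the host as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator (host) and target (namespace) addressing on the same /24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port (4420) towards the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Reachability check in both directions, as in the ping output above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace in interrupt mode (what
    # nvmfappstart does here); the RPC socket stays at /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &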
00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.571 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:49.571 [2024-12-07 01:07:05.566723] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:49.571 [2024-12-07 01:07:05.567752] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:39:49.571 [2024-12-07 01:07:05.567820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:49.571 [2024-12-07 01:07:05.640945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:49.571 [2024-12-07 01:07:05.687954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:49.571 [2024-12-07 01:07:05.688029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:49.571 [2024-12-07 01:07:05.688054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:49.571 [2024-12-07 01:07:05.688066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:49.571 [2024-12-07 01:07:05.688083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:49.571 [2024-12-07 01:07:05.689491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.571 [2024-12-07 01:07:05.689525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:49.571 [2024-12-07 01:07:05.689579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:49.571 [2024-12-07 01:07:05.689582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.829 [2024-12-07 01:07:05.776864] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:49.829 [2024-12-07 01:07:05.777134] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:49.829 [2024-12-07 01:07:05.777413] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:49.829 [2024-12-07 01:07:05.778101] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:49.829 [2024-12-07 01:07:05.778324] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
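[editor's note] Once the target is listening, the remainder of the trace provisions it through rpc.py and drives fio via the fio-wrapper script. A hedged, condensed sketch of the equivalent command sequence is below; the rpc.py path, NQN, serial, and host NQN/ID are the values taken from this run.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport with in-capsule data (-o) and 8192-byte I/O unit size.
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # Seven malloc bdevs (Malloc0..Malloc6) created with "bdev_malloc_create 64 512":
    # two plain namespaces, two RAID0 members, three concat members.
    for i in $(seq 0 6); do $RPC bdev_malloc_create 64 512; done
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # Subsystem, namespaces, and the TCP listener on the namespace-side address.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Connect from the host-side initiator, then run the first verified fio pass
    # (4 KiB blocks, queue depth 1, sequential writes) against the four namespaces.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v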
00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:49.829 01:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:50.086 [2024-12-07 01:07:06.082309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.086 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:50.344 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:50.344 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:50.601 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:50.601 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:50.858 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:50.858 01:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:51.424 01:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:51.424 01:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:51.424 01:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:51.990 01:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:51.990 01:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:51.990 01:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:51.990 01:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:52.558 01:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:52.558 01:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:52.817 01:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:53.076 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:53.076 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:53.335 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:53.335 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:53.594 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.853 [2024-12-07 01:07:09.858536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.853 01:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:54.111 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:54.370 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:54.629 01:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:57.156 01:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:57.156 [global] 00:39:57.156 thread=1 00:39:57.156 invalidate=1 00:39:57.156 rw=write 00:39:57.156 time_based=1 00:39:57.156 runtime=1 00:39:57.156 ioengine=libaio 00:39:57.156 direct=1 00:39:57.156 bs=4096 00:39:57.156 iodepth=1 00:39:57.156 norandommap=0 00:39:57.156 numjobs=1 00:39:57.156 00:39:57.156 verify_dump=1 00:39:57.156 verify_backlog=512 00:39:57.156 verify_state_save=0 00:39:57.156 do_verify=1 00:39:57.156 verify=crc32c-intel 00:39:57.156 [job0] 00:39:57.156 filename=/dev/nvme0n1 00:39:57.156 [job1] 00:39:57.156 filename=/dev/nvme0n2 00:39:57.156 [job2] 00:39:57.156 filename=/dev/nvme0n3 00:39:57.156 [job3] 00:39:57.156 filename=/dev/nvme0n4 00:39:57.156 Could not set queue depth (nvme0n1) 00:39:57.156 Could not set queue depth (nvme0n2) 00:39:57.156 Could not set queue depth (nvme0n3) 00:39:57.156 Could not set queue depth (nvme0n4) 00:39:57.156 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.156 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.156 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.156 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.156 fio-3.35 00:39:57.156 Starting 4 threads 00:39:58.094 00:39:58.094 job0: (groupid=0, jobs=1): err= 0: pid=459641: Sat Dec 7 01:07:14 2024 00:39:58.094 read: IOPS=48, BW=194KiB/s (198kB/s)(200KiB/1032msec) 00:39:58.094 slat (nsec): min=6980, max=34638, avg=16566.52, stdev=6286.92 00:39:58.094 clat (usec): min=291, max=42273, avg=17628.35, stdev=20501.94 00:39:58.094 lat (usec): min=299, max=42289, avg=17644.92, stdev=20500.77 00:39:58.094 clat percentiles (usec): 00:39:58.094 | 1.00th=[ 293], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:39:58.094 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[40633], 00:39:58.094 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:39:58.094 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:58.094 | 99.99th=[42206] 00:39:58.094 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:39:58.094 slat (nsec): min=8909, max=71728, avg=21845.40, stdev=10611.02 00:39:58.094 clat (usec): min=173, max=429, avg=264.64, stdev=47.36 00:39:58.094 lat (usec): min=204, max=452, avg=286.48, stdev=43.88 00:39:58.094 clat percentiles (usec): 00:39:58.094 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 223], 00:39:58.094 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 273], 00:39:58.094 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 355], 00:39:58.094 | 99.00th=[ 
383], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 429], 00:39:58.094 | 99.99th=[ 429] 00:39:58.094 bw ( KiB/s): min= 4096, max= 4096, per=28.19%, avg=4096.00, stdev= 0.00, samples=1 00:39:58.094 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:58.094 lat (usec) : 250=41.46%, 500=54.80% 00:39:58.094 lat (msec) : 50=3.74% 00:39:58.094 cpu : usr=0.48%, sys=1.16%, ctx=563, majf=0, minf=1 00:39:58.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.094 issued rwts: total=50,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.094 job1: (groupid=0, jobs=1): err= 0: pid=459642: Sat Dec 7 01:07:14 2024 00:39:58.094 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:39:58.094 slat (nsec): min=9769, max=33482, avg=16130.67, stdev=6010.47 00:39:58.094 clat (usec): min=40906, max=42036, avg=41595.30, stdev=490.57 00:39:58.094 lat (usec): min=40939, max=42048, avg=41611.43, stdev=487.19 00:39:58.094 clat percentiles (usec): 00:39:58.094 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:58.094 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:39:58.094 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:58.094 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:58.094 | 99.99th=[42206] 00:39:58.094 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:39:58.094 slat (nsec): min=8883, max=77283, avg=19719.34, stdev=10712.04 00:39:58.094 clat (usec): min=165, max=1127, avg=281.52, stdev=71.37 00:39:58.094 lat (usec): min=175, max=1149, avg=301.24, stdev=75.25 00:39:58.094 clat percentiles (usec): 00:39:58.094 | 1.00th=[ 174], 5.00th=[ 210], 10.00th=[ 231], 20.00th=[ 243], 00:39:58.094 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:39:58.094 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 412], 00:39:58.094 | 99.00th=[ 469], 99.50th=[ 506], 99.90th=[ 1123], 99.95th=[ 1123], 00:39:58.094 | 99.99th=[ 1123] 00:39:58.094 bw ( KiB/s): min= 4096, max= 4096, per=28.19%, avg=4096.00, stdev= 0.00, samples=1 00:39:58.094 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:58.094 lat (usec) : 250=24.58%, 500=70.92%, 750=0.19%, 1000=0.19% 00:39:58.094 lat (msec) : 2=0.19%, 50=3.94% 00:39:58.094 cpu : usr=0.58%, sys=1.36%, ctx=533, majf=0, minf=1 00:39:58.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.094 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.094 job2: (groupid=0, jobs=1): err= 0: pid=459643: Sat Dec 7 01:07:14 2024 00:39:58.094 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:58.094 slat (nsec): min=6794, max=27283, avg=7636.74, stdev=929.75 00:39:58.094 clat (usec): min=167, max=520, avg=240.94, stdev=42.93 00:39:58.095 lat (usec): min=174, max=534, avg=248.57, stdev=43.29 00:39:58.095 clat percentiles (usec): 00:39:58.095 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 204], 20.00th=[ 208], 00:39:58.095 | 30.00th=[ 210], 40.00th=[ 
215], 50.00th=[ 223], 60.00th=[ 235], 00:39:58.095 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 310], 00:39:58.095 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 502], 99.95th=[ 502], 00:39:58.095 | 99.99th=[ 523] 00:39:58.095 write: IOPS=2210, BW=8843KiB/s (9055kB/s)(8852KiB/1001msec); 0 zone resets 00:39:58.095 slat (nsec): min=8578, max=45614, avg=10509.61, stdev=2740.20 00:39:58.095 clat (usec): min=136, max=1041, avg=206.60, stdev=65.36 00:39:58.095 lat (usec): min=145, max=1051, avg=217.11, stdev=66.45 00:39:58.095 clat percentiles (usec): 00:39:58.095 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:39:58.095 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 194], 60.00th=[ 200], 00:39:58.095 | 70.00th=[ 239], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 326], 00:39:58.095 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 734], 99.95th=[ 766], 00:39:58.095 | 99.99th=[ 1045] 00:39:58.095 bw ( KiB/s): min= 8192, max= 8192, per=56.38%, avg=8192.00, stdev= 0.00, samples=1 00:39:58.095 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:58.095 lat (usec) : 250=71.98%, 500=27.86%, 750=0.12%, 1000=0.02% 00:39:58.095 lat (msec) : 2=0.02% 00:39:58.095 cpu : usr=3.40%, sys=4.40%, ctx=4263, majf=0, minf=1 00:39:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.095 issued rwts: total=2048,2213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.095 job3: (groupid=0, jobs=1): err= 0: pid=459644: Sat Dec 7 01:07:14 2024 00:39:58.095 read: IOPS=483, BW=1936KiB/s (1982kB/s)(1992KiB/1029msec) 00:39:58.095 slat (nsec): min=6200, max=72808, avg=22530.81, stdev=11041.06 00:39:58.095 clat (usec): min=273, max=41087, avg=1717.11, stdev=7157.75 00:39:58.095 lat (usec): min=298, max=41099, avg=1739.64, stdev=7156.35 00:39:58.095 clat percentiles (usec): 00:39:58.095 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 351], 00:39:58.095 | 30.00th=[ 367], 40.00th=[ 383], 50.00th=[ 408], 60.00th=[ 433], 00:39:58.095 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:39:58.095 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:58.095 | 99.99th=[41157] 00:39:58.095 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:39:58.095 slat (nsec): min=8060, max=74415, avg=21191.30, stdev=10301.34 00:39:58.095 clat (usec): min=188, max=468, avg=283.54, stdev=52.34 00:39:58.095 lat (usec): min=206, max=481, avg=304.73, stdev=49.78 00:39:58.095 clat percentiles (usec): 00:39:58.095 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 235], 00:39:58.095 | 30.00th=[ 247], 40.00th=[ 262], 50.00th=[ 281], 60.00th=[ 297], 00:39:58.095 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 371], 00:39:58.095 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 469], 99.95th=[ 469], 00:39:58.095 | 99.99th=[ 469] 00:39:58.095 bw ( KiB/s): min= 4096, max= 4096, per=28.19%, avg=4096.00, stdev= 0.00, samples=1 00:39:58.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:58.095 lat (usec) : 250=15.74%, 500=75.15%, 750=7.33%, 1000=0.20% 00:39:58.095 lat (msec) : 50=1.58% 00:39:58.095 cpu : usr=1.07%, sys=2.24%, ctx=1011, majf=0, minf=1 00:39:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.095 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.095 issued rwts: total=498,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.095 00:39:58.095 Run status group 0 (all jobs): 00:39:58.095 READ: bw=9.91MiB/s (10.4MB/s), 81.5KiB/s-8184KiB/s (83.4kB/s-8380kB/s), io=10.2MiB (10.7MB), run=1001-1032msec 00:39:58.095 WRITE: bw=14.2MiB/s (14.9MB/s), 1984KiB/s-8843KiB/s (2032kB/s-9055kB/s), io=14.6MiB (15.4MB), run=1001-1032msec 00:39:58.095 00:39:58.095 Disk stats (read/write): 00:39:58.095 nvme0n1: ios=88/512, merge=0/0, ticks=1020/132, in_queue=1152, util=85.97% 00:39:58.095 nvme0n2: ios=66/512, merge=0/0, ticks=734/136, in_queue=870, util=91.36% 00:39:58.095 nvme0n3: ios=1593/2027, merge=0/0, ticks=829/403, in_queue=1232, util=93.75% 00:39:58.095 nvme0n4: ios=550/512, merge=0/0, ticks=709/136, in_queue=845, util=95.49% 00:39:58.095 01:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:58.353 [global] 00:39:58.353 thread=1 00:39:58.353 invalidate=1 00:39:58.353 rw=randwrite 00:39:58.353 time_based=1 00:39:58.353 runtime=1 00:39:58.353 ioengine=libaio 00:39:58.353 direct=1 00:39:58.353 bs=4096 00:39:58.353 iodepth=1 00:39:58.353 norandommap=0 00:39:58.353 numjobs=1 00:39:58.353 00:39:58.353 verify_dump=1 00:39:58.353 verify_backlog=512 00:39:58.353 verify_state_save=0 00:39:58.353 do_verify=1 00:39:58.353 verify=crc32c-intel 00:39:58.353 [job0] 00:39:58.353 filename=/dev/nvme0n1 00:39:58.353 [job1] 00:39:58.353 filename=/dev/nvme0n2 00:39:58.353 [job2] 00:39:58.353 filename=/dev/nvme0n3 00:39:58.353 [job3] 00:39:58.353 filename=/dev/nvme0n4 00:39:58.353 Could not set queue depth (nvme0n1) 00:39:58.353 Could not set queue depth (nvme0n2) 00:39:58.353 Could not set queue depth (nvme0n3) 00:39:58.353 Could not set queue depth (nvme0n4) 00:39:58.353 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:58.353 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:58.353 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:58.353 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:58.353 fio-3.35 00:39:58.353 Starting 4 threads 00:39:59.726 00:39:59.726 job0: (groupid=0, jobs=1): err= 0: pid=459890: Sat Dec 7 01:07:15 2024 00:39:59.726 read: IOPS=625, BW=2500KiB/s (2560kB/s)(2520KiB/1008msec) 00:39:59.726 slat (nsec): min=5931, max=53962, avg=14928.83, stdev=6712.13 00:39:59.726 clat (usec): min=217, max=41237, avg=1165.68, stdev=5755.72 00:39:59.726 lat (usec): min=227, max=41254, avg=1180.61, stdev=5756.13 00:39:59.726 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 243], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:39:59.727 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:39:59.727 | 70.00th=[ 343], 80.00th=[ 375], 90.00th=[ 449], 95.00th=[ 494], 00:39:59.727 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:59.727 | 99.99th=[41157] 00:39:59.727 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:39:59.727 slat (nsec): min=6687, max=58027, 
avg=15377.32, stdev=8541.20 00:39:59.727 clat (usec): min=147, max=776, avg=233.06, stdev=48.96 00:39:59.727 lat (usec): min=157, max=790, avg=248.44, stdev=48.48 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 157], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 198], 00:39:59.727 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 237], 00:39:59.727 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 314], 00:39:59.727 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 766], 99.95th=[ 775], 00:39:59.727 | 99.99th=[ 775] 00:39:59.727 bw ( KiB/s): min= 8175, max= 8175, per=37.05%, avg=8175.00, stdev= 0.00, samples=1 00:39:59.727 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:39:59.727 lat (usec) : 250=44.07%, 500=53.99%, 750=1.03%, 1000=0.12% 00:39:59.727 lat (msec) : 50=0.79% 00:39:59.727 cpu : usr=1.39%, sys=3.97%, ctx=1655, majf=0, minf=1 00:39:59.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 issued rwts: total=630,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:59.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:59.727 job1: (groupid=0, jobs=1): err= 0: pid=459908: Sat Dec 7 01:07:15 2024 00:39:59.727 read: IOPS=1763, BW=7056KiB/s (7225kB/s)(7204KiB/1021msec) 00:39:59.727 slat (nsec): min=4542, max=72812, avg=14417.00, stdev=7608.90 00:39:59.727 clat (usec): min=184, max=42128, avg=304.70, stdev=989.69 00:39:59.727 lat (usec): min=195, max=42147, avg=319.11, stdev=990.06 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:39:59.727 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:39:59.727 | 70.00th=[ 269], 80.00th=[ 318], 90.00th=[ 445], 95.00th=[ 469], 00:39:59.727 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 635], 99.95th=[42206], 00:39:59.727 | 99.99th=[42206] 00:39:59.727 write: IOPS=2005, BW=8024KiB/s (8216kB/s)(8192KiB/1021msec); 0 zone resets 00:39:59.727 slat (nsec): min=6387, max=53600, avg=15369.22, stdev=5926.50 00:39:59.727 clat (usec): min=134, max=475, avg=194.03, stdev=49.93 00:39:59.727 lat (usec): min=141, max=497, avg=209.40, stdev=50.49 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 159], 00:39:59.727 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 186], 00:39:59.727 | 70.00th=[ 210], 80.00th=[ 229], 90.00th=[ 260], 95.00th=[ 302], 00:39:59.727 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 441], 00:39:59.727 | 99.99th=[ 478] 00:39:59.727 bw ( KiB/s): min= 8175, max= 8192, per=37.09%, avg=8183.50, stdev=12.02, samples=2 00:39:59.727 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:39:59.727 lat (usec) : 250=73.79%, 500=24.84%, 750=1.35% 00:39:59.727 lat (msec) : 50=0.03% 00:39:59.727 cpu : usr=2.25%, sys=6.76%, ctx=3851, majf=0, minf=1 00:39:59.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 issued rwts: total=1801,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:59.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:59.727 job2: (groupid=0, jobs=1): err= 0: pid=459938: Sat Dec 7 01:07:15 
2024 00:39:59.727 read: IOPS=22, BW=90.7KiB/s (92.9kB/s)(92.0KiB/1014msec) 00:39:59.727 slat (nsec): min=8355, max=33719, avg=24191.35, stdev=9300.44 00:39:59.727 clat (usec): min=263, max=42033, avg=39413.47, stdev=8545.27 00:39:59.727 lat (usec): min=297, max=42049, avg=39437.66, stdev=8543.23 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:59.727 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:59.727 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:39:59.727 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:59.727 | 99.99th=[42206] 00:39:59.727 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:39:59.727 slat (nsec): min=6102, max=42760, avg=11970.37, stdev=5652.72 00:39:59.727 clat (usec): min=161, max=263, avg=192.11, stdev=13.61 00:39:59.727 lat (usec): min=169, max=306, avg=204.08, stdev=14.85 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:39:59.727 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:39:59.727 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 215], 00:39:59.727 | 99.00th=[ 235], 99.50th=[ 239], 99.90th=[ 265], 99.95th=[ 265], 00:39:59.727 | 99.99th=[ 265] 00:39:59.727 bw ( KiB/s): min= 4087, max= 4087, per=18.52%, avg=4087.00, stdev= 0.00, samples=1 00:39:59.727 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:59.727 lat (usec) : 250=95.51%, 500=0.37% 00:39:59.727 lat (msec) : 50=4.11% 00:39:59.727 cpu : usr=0.10%, sys=0.79%, ctx=535, majf=0, minf=1 00:39:59.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:59.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:59.727 job3: (groupid=0, jobs=1): err= 0: pid=459948: Sat Dec 7 01:07:15 2024 00:39:59.727 read: IOPS=1768, BW=7073KiB/s (7243kB/s)(7080KiB/1001msec) 00:39:59.727 slat (nsec): min=5956, max=51925, avg=13308.92, stdev=5600.51 00:39:59.727 clat (usec): min=201, max=1473, avg=274.25, stdev=45.25 00:39:59.727 lat (usec): min=207, max=1490, avg=287.56, stdev=46.71 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:39:59.727 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:39:59.727 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:39:59.727 | 99.00th=[ 469], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 1467], 00:39:59.727 | 99.99th=[ 1467] 00:39:59.727 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:59.727 slat (nsec): min=7670, max=54538, avg=19369.32, stdev=6334.48 00:39:59.727 clat (usec): min=160, max=392, avg=211.86, stdev=16.39 00:39:59.727 lat (usec): min=168, max=417, avg=231.23, stdev=19.68 00:39:59.727 clat percentiles (usec): 00:39:59.727 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 202], 00:39:59.727 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:39:59.727 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 237], 00:39:59.727 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:39:59.727 | 99.99th=[ 392] 00:39:59.727 bw ( 
KiB/s): min= 8175, max= 8175, per=37.05%, avg=8175.00, stdev= 0.00, samples=1 00:39:59.727 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:39:59.727 lat (usec) : 250=59.43%, 500=40.20%, 750=0.34% 00:39:59.727 lat (msec) : 2=0.03% 00:39:59.727 cpu : usr=4.40%, sys=8.70%, ctx=3821, majf=0, minf=1 00:39:59.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:59.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:59.727 issued rwts: total=1770,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:59.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:59.727 00:39:59.727 Run status group 0 (all jobs): 00:39:59.727 READ: bw=16.2MiB/s (16.9MB/s), 90.7KiB/s-7073KiB/s (92.9kB/s-7243kB/s), io=16.5MiB (17.3MB), run=1001-1021msec 00:39:59.727 WRITE: bw=21.5MiB/s (22.6MB/s), 2020KiB/s-8184KiB/s (2068kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1021msec 00:39:59.727 00:39:59.727 Disk stats (read/write): 00:39:59.727 nvme0n1: ios=583/1024, merge=0/0, ticks=1528/230, in_queue=1758, util=97.70% 00:39:59.727 nvme0n2: ios=1575/1838, merge=0/0, ticks=1376/345, in_queue=1721, util=96.34% 00:39:59.727 nvme0n3: ios=19/512, merge=0/0, ticks=743/98, in_queue=841, util=88.82% 00:39:59.727 nvme0n4: ios=1586/1688, merge=0/0, ticks=741/343, in_queue=1084, util=99.79% 00:39:59.727 01:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:59.727 [global] 00:39:59.727 thread=1 00:39:59.727 invalidate=1 00:39:59.727 rw=write 00:39:59.728 time_based=1 00:39:59.728 runtime=1 00:39:59.728 ioengine=libaio 00:39:59.728 direct=1 00:39:59.728 bs=4096 00:39:59.728 iodepth=128 00:39:59.728 norandommap=0 00:39:59.728 numjobs=1 00:39:59.728 00:39:59.728 verify_dump=1 00:39:59.728 verify_backlog=512 00:39:59.728 verify_state_save=0 00:39:59.728 do_verify=1 00:39:59.728 verify=crc32c-intel 00:39:59.728 [job0] 00:39:59.728 filename=/dev/nvme0n1 00:39:59.728 [job1] 00:39:59.728 filename=/dev/nvme0n2 00:39:59.728 [job2] 00:39:59.728 filename=/dev/nvme0n3 00:39:59.728 [job3] 00:39:59.728 filename=/dev/nvme0n4 00:39:59.728 Could not set queue depth (nvme0n1) 00:39:59.728 Could not set queue depth (nvme0n2) 00:39:59.728 Could not set queue depth (nvme0n3) 00:39:59.728 Could not set queue depth (nvme0n4) 00:39:59.986 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:59.986 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:59.986 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:59.986 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:59.986 fio-3.35 00:39:59.986 Starting 4 threads 00:40:01.362 00:40:01.362 job0: (groupid=0, jobs=1): err= 0: pid=460216: Sat Dec 7 01:07:17 2024 00:40:01.362 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:40:01.362 slat (usec): min=2, max=10832, avg=93.31, stdev=776.46 00:40:01.362 clat (usec): min=3121, max=22321, avg=11946.07, stdev=2638.22 00:40:01.362 lat (usec): min=3126, max=27888, avg=12039.38, stdev=2732.42 00:40:01.362 clat percentiles (usec): 00:40:01.362 | 1.00th=[ 7111], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 
00:40:01.362 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:40:01.362 | 70.00th=[11863], 80.00th=[13435], 90.00th=[16057], 95.00th=[17695], 00:40:01.362 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:40:01.362 | 99.99th=[22414] 00:40:01.362 write: IOPS=5624, BW=22.0MiB/s (23.0MB/s)(22.2MiB/1010msec); 0 zone resets 00:40:01.362 slat (usec): min=3, max=9565, avg=79.14, stdev=637.61 00:40:01.362 clat (usec): min=1174, max=22180, avg=10713.14, stdev=2448.23 00:40:01.362 lat (usec): min=1185, max=22187, avg=10792.29, stdev=2493.46 00:40:01.362 clat percentiles (usec): 00:40:01.362 | 1.00th=[ 3916], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 9503], 00:40:01.362 | 30.00th=[10028], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:40:01.362 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13435], 95.00th=[15139], 00:40:01.362 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20317], 99.95th=[21890], 00:40:01.362 | 99.99th=[22152] 00:40:01.362 bw ( KiB/s): min=21112, max=23944, per=35.77%, avg=22528.00, stdev=2002.53, samples=2 00:40:01.362 iops : min= 5278, max= 5986, avg=5632.00, stdev=500.63, samples=2 00:40:01.362 lat (msec) : 2=0.09%, 4=0.55%, 10=20.08%, 20=78.25%, 50=1.03% 00:40:01.362 cpu : usr=4.06%, sys=6.54%, ctx=307, majf=0, minf=1 00:40:01.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:01.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:01.362 issued rwts: total=5632,5681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:01.362 job1: (groupid=0, jobs=1): err= 0: pid=460217: Sat Dec 7 01:07:17 2024 00:40:01.362 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:40:01.362 slat (usec): min=2, max=17863, avg=138.15, stdev=1015.22 00:40:01.362 clat (usec): min=4680, max=57791, avg=16504.92, stdev=7081.59 00:40:01.362 lat (usec): min=4690, max=57797, avg=16643.08, stdev=7177.38 00:40:01.362 clat percentiles (usec): 00:40:01.362 | 1.00th=[ 8848], 5.00th=[11338], 10.00th=[11600], 20.00th=[12125], 00:40:01.362 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13960], 60.00th=[16581], 00:40:01.362 | 70.00th=[17433], 80.00th=[18482], 90.00th=[23725], 95.00th=[31851], 00:40:01.362 | 99.00th=[48497], 99.50th=[50070], 99.90th=[51643], 99.95th=[57934], 00:40:01.362 | 99.99th=[57934] 00:40:01.362 write: IOPS=3615, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1010msec); 0 zone resets 00:40:01.362 slat (usec): min=4, max=14244, avg=132.49, stdev=844.60 00:40:01.362 clat (usec): min=1159, max=61238, avg=18837.08, stdev=11333.04 00:40:01.362 lat (usec): min=1166, max=61246, avg=18969.57, stdev=11398.44 00:40:01.362 clat percentiles (usec): 00:40:01.362 | 1.00th=[ 4948], 5.00th=[10028], 10.00th=[10945], 20.00th=[11469], 00:40:01.362 | 30.00th=[11731], 40.00th=[12518], 50.00th=[14484], 60.00th=[15533], 00:40:01.362 | 70.00th=[20579], 80.00th=[24249], 90.00th=[38011], 95.00th=[45876], 00:40:01.362 | 99.00th=[56361], 99.50th=[57934], 99.90th=[61080], 99.95th=[61080], 00:40:01.362 | 99.99th=[61080] 00:40:01.362 bw ( KiB/s): min=12688, max=15984, per=22.76%, avg=14336.00, stdev=2330.62, samples=2 00:40:01.362 iops : min= 3172, max= 3996, avg=3584.00, stdev=582.66, samples=2 00:40:01.362 lat (msec) : 2=0.15%, 4=0.08%, 10=4.16%, 20=72.22%, 50=21.24% 00:40:01.362 lat (msec) : 100=2.14% 00:40:01.362 cpu : usr=2.38%, sys=4.96%, ctx=303, majf=0, minf=1 
00:40:01.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:40:01.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:01.362 issued rwts: total=3584,3652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:01.362 job2: (groupid=0, jobs=1): err= 0: pid=460218: Sat Dec 7 01:07:17 2024 00:40:01.362 read: IOPS=3168, BW=12.4MiB/s (13.0MB/s)(13.0MiB/1048msec) 00:40:01.362 slat (usec): min=2, max=12618, avg=86.94, stdev=708.60 00:40:01.362 clat (usec): min=1413, max=65296, avg=15023.10, stdev=9027.39 00:40:01.362 lat (usec): min=1480, max=65300, avg=15110.03, stdev=9071.05 00:40:01.362 clat percentiles (usec): 00:40:01.362 | 1.00th=[ 2114], 5.00th=[ 5080], 10.00th=[ 9372], 20.00th=[10290], 00:40:01.362 | 30.00th=[11338], 40.00th=[12780], 50.00th=[14353], 60.00th=[15008], 00:40:01.362 | 70.00th=[15401], 80.00th=[15664], 90.00th=[19006], 95.00th=[25297], 00:40:01.362 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[65274], 00:40:01.362 | 99.99th=[65274] 00:40:01.362 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1048msec); 0 zone resets 00:40:01.362 slat (usec): min=3, max=13199, avg=171.22, stdev=1049.17 00:40:01.362 clat (usec): min=989, max=131155, avg=23132.92, stdev=26161.78 00:40:01.362 lat (usec): min=994, max=131165, avg=23304.13, stdev=26344.20 00:40:01.362 clat percentiles (msec): 00:40:01.362 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:40:01.362 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:40:01.362 | 70.00th=[ 17], 80.00th=[ 25], 90.00th=[ 57], 95.00th=[ 104], 00:40:01.362 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:40:01.362 | 99.99th=[ 132] 00:40:01.362 bw ( KiB/s): min= 8824, max=19848, per=22.76%, avg=14336.00, stdev=7795.15, samples=2 00:40:01.362 iops : min= 2206, max= 4962, avg=3584.00, stdev=1948.79, samples=2 00:40:01.363 lat (usec) : 1000=0.10% 00:40:01.363 lat (msec) : 2=0.26%, 4=1.59%, 10=13.64%, 20=67.50%, 50=8.79% 00:40:01.363 lat (msec) : 100=5.27%, 250=2.84% 00:40:01.363 cpu : usr=1.91%, sys=3.72%, ctx=325, majf=0, minf=1 00:40:01.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:01.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:01.363 issued rwts: total=3321,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:01.363 job3: (groupid=0, jobs=1): err= 0: pid=460219: Sat Dec 7 01:07:17 2024 00:40:01.363 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1006msec) 00:40:01.363 slat (usec): min=2, max=22002, avg=140.83, stdev=1028.87 00:40:01.363 clat (usec): min=5655, max=58522, avg=18214.83, stdev=10626.30 00:40:01.363 lat (usec): min=5661, max=58528, avg=18355.66, stdev=10695.16 00:40:01.363 clat percentiles (usec): 00:40:01.363 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[11731], 20.00th=[12125], 00:40:01.363 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[14615], 00:40:01.363 | 70.00th=[16188], 80.00th=[21890], 90.00th=[34866], 95.00th=[42730], 00:40:01.363 | 99.00th=[54264], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:40:01.363 | 99.99th=[58459] 00:40:01.363 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:40:01.363 slat 
(usec): min=3, max=27336, avg=134.80, stdev=1100.47 00:40:01.363 clat (usec): min=1683, max=69679, avg=18305.87, stdev=10028.47 00:40:01.363 lat (usec): min=1693, max=69683, avg=18440.67, stdev=10121.83 00:40:01.363 clat percentiles (usec): 00:40:01.363 | 1.00th=[ 7308], 5.00th=[ 9896], 10.00th=[11731], 20.00th=[12649], 00:40:01.363 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:40:01.363 | 70.00th=[17957], 80.00th=[22676], 90.00th=[35914], 95.00th=[40633], 00:40:01.363 | 99.00th=[53740], 99.50th=[53740], 99.90th=[69731], 99.95th=[69731], 00:40:01.363 | 99.99th=[69731] 00:40:01.363 bw ( KiB/s): min= 9888, max=18784, per=22.76%, avg=14336.00, stdev=6290.42, samples=2 00:40:01.363 iops : min= 2472, max= 4696, avg=3584.00, stdev=1572.61, samples=2 00:40:01.363 lat (msec) : 2=0.07%, 10=3.80%, 20=72.64%, 50=21.38%, 100=2.11% 00:40:01.363 cpu : usr=2.59%, sys=3.18%, ctx=246, majf=0, minf=1 00:40:01.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:01.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:01.363 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:01.363 00:40:01.363 Run status group 0 (all jobs): 00:40:01.363 READ: bw=59.4MiB/s (62.3MB/s), 12.4MiB/s-21.8MiB/s (13.0MB/s-22.8MB/s), io=62.2MiB (65.2MB), run=1006-1048msec 00:40:01.363 WRITE: bw=61.5MiB/s (64.5MB/s), 13.4MiB/s-22.0MiB/s (14.0MB/s-23.0MB/s), io=64.5MiB (67.6MB), run=1006-1048msec 00:40:01.363 00:40:01.363 Disk stats (read/write): 00:40:01.363 nvme0n1: ios=4658/4843, merge=0/0, ticks=53858/50312, in_queue=104170, util=87.27% 00:40:01.363 nvme0n2: ios=2880/3072, merge=0/0, ticks=45890/59340, in_queue=105230, util=97.86% 00:40:01.363 nvme0n3: ios=2607/2655, merge=0/0, ticks=27740/44446, in_queue=72186, util=96.45% 00:40:01.363 nvme0n4: ios=3090/3370, merge=0/0, ticks=23872/27180, in_queue=51052, util=97.90% 00:40:01.363 01:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:01.363 [global] 00:40:01.363 thread=1 00:40:01.363 invalidate=1 00:40:01.363 rw=randwrite 00:40:01.363 time_based=1 00:40:01.363 runtime=1 00:40:01.363 ioengine=libaio 00:40:01.363 direct=1 00:40:01.363 bs=4096 00:40:01.363 iodepth=128 00:40:01.363 norandommap=0 00:40:01.363 numjobs=1 00:40:01.363 00:40:01.363 verify_dump=1 00:40:01.363 verify_backlog=512 00:40:01.363 verify_state_save=0 00:40:01.363 do_verify=1 00:40:01.363 verify=crc32c-intel 00:40:01.363 [job0] 00:40:01.363 filename=/dev/nvme0n1 00:40:01.363 [job1] 00:40:01.363 filename=/dev/nvme0n2 00:40:01.363 [job2] 00:40:01.363 filename=/dev/nvme0n3 00:40:01.363 [job3] 00:40:01.363 filename=/dev/nvme0n4 00:40:01.363 Could not set queue depth (nvme0n1) 00:40:01.363 Could not set queue depth (nvme0n2) 00:40:01.363 Could not set queue depth (nvme0n3) 00:40:01.363 Could not set queue depth (nvme0n4) 00:40:01.363 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:01.363 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:01.363 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:01.363 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:01.363 fio-3.35 00:40:01.363 Starting 4 threads 00:40:02.741 00:40:02.741 job0: (groupid=0, jobs=1): err= 0: pid=460448: Sat Dec 7 01:07:18 2024 00:40:02.741 read: IOPS=1919, BW=7677KiB/s (7862kB/s)(7708KiB/1004msec) 00:40:02.741 slat (usec): min=2, max=24078, avg=232.37, stdev=1662.65 00:40:02.741 clat (usec): min=3574, max=70061, avg=29408.87, stdev=14458.79 00:40:02.741 lat (usec): min=3577, max=70072, avg=29641.24, stdev=14565.36 00:40:02.741 clat percentiles (usec): 00:40:02.741 | 1.00th=[ 5997], 5.00th=[10683], 10.00th=[12256], 20.00th=[12649], 00:40:02.741 | 30.00th=[16712], 40.00th=[25035], 50.00th=[30016], 60.00th=[35390], 00:40:02.741 | 70.00th=[37487], 80.00th=[42730], 90.00th=[47449], 95.00th=[57934], 00:40:02.741 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[69731], 00:40:02.741 | 99.99th=[69731] 00:40:02.741 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:40:02.741 slat (usec): min=3, max=26765, avg=256.99, stdev=1509.89 00:40:02.741 clat (usec): min=843, max=99160, avg=34539.82, stdev=17513.81 00:40:02.741 lat (usec): min=882, max=99173, avg=34796.81, stdev=17615.49 00:40:02.741 clat percentiles (usec): 00:40:02.741 | 1.00th=[ 4555], 5.00th=[15270], 10.00th=[20579], 20.00th=[22414], 00:40:02.741 | 30.00th=[22676], 40.00th=[22938], 50.00th=[29754], 60.00th=[34341], 00:40:02.741 | 70.00th=[40633], 80.00th=[45876], 90.00th=[59507], 95.00th=[65799], 00:40:02.741 | 99.00th=[95945], 99.50th=[98042], 99.90th=[99091], 99.95th=[99091], 00:40:02.741 | 99.99th=[99091] 00:40:02.741 bw ( KiB/s): min= 8136, max= 8248, per=15.00%, avg=8192.00, stdev=79.20, samples=2 00:40:02.741 iops : min= 2034, max= 2062, avg=2048.00, stdev=19.80, samples=2 00:40:02.741 lat (usec) : 1000=0.05% 00:40:02.741 lat (msec) : 2=0.18%, 4=0.48%, 10=1.53%, 20=18.94%, 50=67.32% 00:40:02.741 lat (msec) : 100=11.50% 00:40:02.741 cpu : usr=1.89%, sys=2.19%, ctx=243, majf=0, minf=1 00:40:02.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:40:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:02.741 issued rwts: total=1927,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:02.741 job1: (groupid=0, jobs=1): err= 0: pid=460449: Sat Dec 7 01:07:18 2024 00:40:02.741 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:40:02.741 slat (usec): min=3, max=20261, avg=182.02, stdev=1272.44 00:40:02.741 clat (usec): min=3129, max=93711, avg=21143.27, stdev=16529.04 00:40:02.741 lat (usec): min=3137, max=93724, avg=21325.30, stdev=16687.29 00:40:02.741 clat percentiles (usec): 00:40:02.741 | 1.00th=[ 4948], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11207], 00:40:02.741 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13173], 60.00th=[13829], 00:40:02.741 | 70.00th=[17433], 80.00th=[34341], 90.00th=[48497], 95.00th=[53740], 00:40:02.741 | 99.00th=[79168], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:40:02.741 | 99.99th=[93848] 00:40:02.741 write: IOPS=2666, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1002msec); 0 zone resets 00:40:02.741 slat (usec): min=4, max=28185, avg=187.97, stdev=1425.72 00:40:02.741 clat (usec): min=405, max=94745, avg=27337.39, stdev=21505.83 00:40:02.741 lat (usec): min=3370, max=94777, avg=27525.36, stdev=21675.95 00:40:02.741 clat percentiles 
(usec): 00:40:02.741 | 1.00th=[ 3621], 5.00th=[ 7635], 10.00th=[10421], 20.00th=[11076], 00:40:02.741 | 30.00th=[11600], 40.00th=[17433], 50.00th=[20579], 60.00th=[22676], 00:40:02.741 | 70.00th=[22938], 80.00th=[50070], 90.00th=[63701], 95.00th=[76022], 00:40:02.741 | 99.00th=[80217], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848], 00:40:02.741 | 99.99th=[94897] 00:40:02.741 bw ( KiB/s): min= 8192, max=12288, per=18.75%, avg=10240.00, stdev=2896.31, samples=2 00:40:02.741 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:40:02.741 lat (usec) : 500=0.02% 00:40:02.741 lat (msec) : 4=1.19%, 10=5.31%, 20=52.06%, 50=27.06%, 100=14.35% 00:40:02.741 cpu : usr=3.50%, sys=4.50%, ctx=292, majf=0, minf=1 00:40:02.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:40:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:02.741 issued rwts: total=2560,2672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:02.741 job2: (groupid=0, jobs=1): err= 0: pid=460450: Sat Dec 7 01:07:18 2024 00:40:02.741 read: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(20.0MiB/1018msec) 00:40:02.741 slat (usec): min=3, max=12568, avg=99.50, stdev=799.75 00:40:02.741 clat (usec): min=4415, max=26489, avg=12770.04, stdev=3182.50 00:40:02.741 lat (usec): min=4431, max=26829, avg=12869.54, stdev=3251.16 00:40:02.741 clat percentiles (usec): 00:40:02.741 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10159], 00:40:02.741 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11863], 60.00th=[12911], 00:40:02.741 | 70.00th=[14222], 80.00th=[15533], 90.00th=[17171], 95.00th=[19268], 00:40:02.741 | 99.00th=[21627], 99.50th=[23200], 99.90th=[24511], 99.95th=[24511], 00:40:02.741 | 99.99th=[26608] 00:40:02.741 write: IOPS=5208, BW=20.3MiB/s (21.3MB/s)(20.7MiB/1018msec); 0 zone resets 00:40:02.741 slat (usec): min=4, max=10200, avg=82.28, stdev=626.38 00:40:02.741 clat (usec): min=3717, max=47451, avg=12033.74, stdev=4942.99 00:40:02.741 lat (usec): min=3730, max=47460, avg=12116.02, stdev=4978.56 00:40:02.741 clat percentiles (usec): 00:40:02.741 | 1.00th=[ 5800], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 9634], 00:40:02.741 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11600], 60.00th=[11994], 00:40:02.741 | 70.00th=[12649], 80.00th=[13435], 90.00th=[15008], 95.00th=[17171], 00:40:02.741 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:40:02.741 | 99.99th=[47449] 00:40:02.741 bw ( KiB/s): min=20480, max=20920, per=37.90%, avg=20700.00, stdev=311.13, samples=2 00:40:02.741 iops : min= 5120, max= 5230, avg=5175.00, stdev=77.78, samples=2 00:40:02.741 lat (msec) : 4=0.08%, 10=21.62%, 20=75.27%, 50=3.03% 00:40:02.741 cpu : usr=6.69%, sys=10.32%, ctx=291, majf=0, minf=1 00:40:02.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:02.741 issued rwts: total=5120,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:02.741 job3: (groupid=0, jobs=1): err= 0: pid=460451: Sat Dec 7 01:07:18 2024 00:40:02.741 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec) 00:40:02.741 slat (usec): min=3, max=16569, avg=121.02, stdev=939.04 
00:40:02.741 clat (usec): min=9606, max=42190, avg=16460.28, stdev=4595.90 00:40:02.741 lat (usec): min=9621, max=42200, avg=16581.31, stdev=4678.89 00:40:02.741 clat percentiles (usec): 00:40:02.742 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[11338], 20.00th=[12518], 00:40:02.742 | 30.00th=[13173], 40.00th=[14353], 50.00th=[15795], 60.00th=[17433], 00:40:02.742 | 70.00th=[18482], 80.00th=[19268], 90.00th=[22938], 95.00th=[24511], 00:40:02.742 | 99.00th=[31065], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424], 00:40:02.742 | 99.99th=[42206] 00:40:02.742 write: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1018msec); 0 zone resets 00:40:02.742 slat (usec): min=4, max=14210, avg=133.81, stdev=944.78 00:40:02.742 clat (usec): min=1812, max=123498, avg=18052.78, stdev=17862.91 00:40:02.742 lat (usec): min=1822, max=123506, avg=18186.60, stdev=17978.77 00:40:02.742 clat percentiles (msec): 00:40:02.742 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13], 00:40:02.742 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:40:02.742 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 47], 00:40:02.742 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 124], 99.95th=[ 124], 00:40:02.742 | 99.99th=[ 124] 00:40:02.742 bw ( KiB/s): min=12096, max=17904, per=27.47%, avg=15000.00, stdev=4106.88, samples=2 00:40:02.742 iops : min= 3024, max= 4476, avg=3750.00, stdev=1026.72, samples=2 00:40:02.742 lat (msec) : 2=0.17%, 4=0.32%, 10=6.90%, 20=77.42%, 50=12.63% 00:40:02.742 lat (msec) : 100=1.54%, 250=1.02% 00:40:02.742 cpu : usr=6.00%, sys=7.57%, ctx=254, majf=0, minf=1 00:40:02.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:02.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:02.742 issued rwts: total=3584,3877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:02.742 00:40:02.742 Run status group 0 (all jobs): 00:40:02.742 READ: bw=50.6MiB/s (53.1MB/s), 7677KiB/s-19.6MiB/s (7862kB/s-20.6MB/s), io=51.5MiB (54.0MB), run=1002-1018msec 00:40:02.742 WRITE: bw=53.3MiB/s (55.9MB/s), 8159KiB/s-20.3MiB/s (8355kB/s-21.3MB/s), io=54.3MiB (56.9MB), run=1002-1018msec 00:40:02.742 00:40:02.742 Disk stats (read/write): 00:40:02.742 nvme0n1: ios=1567/1591, merge=0/0, ticks=28749/45735, in_queue=74484, util=98.20% 00:40:02.742 nvme0n2: ios=1759/2048, merge=0/0, ticks=25787/39884, in_queue=65671, util=90.96% 00:40:02.742 nvme0n3: ios=4434/4608, merge=0/0, ticks=51680/49725, in_queue=101405, util=89.05% 00:40:02.742 nvme0n4: ios=3535/3584, merge=0/0, ticks=56254/47421, in_queue=103675, util=98.00% 00:40:02.742 01:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:02.742 01:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=460586 00:40:02.742 01:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:02.742 01:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:02.742 [global] 00:40:02.742 thread=1 00:40:02.742 invalidate=1 00:40:02.742 rw=read 00:40:02.742 time_based=1 00:40:02.742 runtime=10 00:40:02.742 ioengine=libaio 00:40:02.742 direct=1 00:40:02.742 bs=4096 00:40:02.742 iodepth=1 00:40:02.742 norandommap=1 
00:40:02.742 numjobs=1 00:40:02.742 00:40:02.742 [job0] 00:40:02.742 filename=/dev/nvme0n1 00:40:02.742 [job1] 00:40:02.742 filename=/dev/nvme0n2 00:40:02.742 [job2] 00:40:02.742 filename=/dev/nvme0n3 00:40:02.742 [job3] 00:40:02.742 filename=/dev/nvme0n4 00:40:02.742 Could not set queue depth (nvme0n1) 00:40:02.742 Could not set queue depth (nvme0n2) 00:40:02.742 Could not set queue depth (nvme0n3) 00:40:02.742 Could not set queue depth (nvme0n4) 00:40:02.742 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.742 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.742 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.742 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.742 fio-3.35 00:40:02.742 Starting 4 threads 00:40:06.019 01:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:06.019 01:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:06.019 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2617344, buflen=4096 00:40:06.019 fio: pid=460685, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:06.276 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:06.276 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:06.276 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13443072, buflen=4096 00:40:06.276 fio: pid=460684, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:06.534 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=27512832, buflen=4096 00:40:06.534 fio: pid=460682, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:06.534 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:06.534 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:06.794 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=14663680, buflen=4096 00:40:06.794 fio: pid=460683, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:40:06.794 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:06.794 01:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:06.794 00:40:06.794 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=460682: Sat Dec 7 01:07:22 2024 00:40:06.794 read: IOPS=1920, BW=7681KiB/s (7865kB/s)(26.2MiB/3498msec) 
00:40:06.794 slat (usec): min=4, max=13829, avg=12.87, stdev=230.10 00:40:06.794 clat (usec): min=193, max=42020, avg=502.63, stdev=3214.22 00:40:06.794 lat (usec): min=198, max=42034, avg=515.50, stdev=3223.11 00:40:06.794 clat percentiles (usec): 00:40:06.794 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:40:06.794 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:40:06.794 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 322], 00:40:06.794 | 99.00th=[ 515], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:06.794 | 99.99th=[42206] 00:40:06.794 bw ( KiB/s): min= 96, max=14696, per=42.78%, avg=6405.33, stdev=6738.46, samples=6 00:40:06.794 iops : min= 24, max= 3674, avg=1601.33, stdev=1684.61, samples=6 00:40:06.794 lat (usec) : 250=69.02%, 500=29.85%, 750=0.37%, 1000=0.06% 00:40:06.794 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01%, 50=0.63% 00:40:06.794 cpu : usr=0.71%, sys=1.97%, ctx=6725, majf=0, minf=1 00:40:06.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.794 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=460683: Sat Dec 7 01:07:22 2024 00:40:06.794 read: IOPS=942, BW=3769KiB/s (3860kB/s)(14.0MiB/3799msec) 00:40:06.794 slat (usec): min=4, max=16476, avg=18.15, stdev=350.24 00:40:06.794 clat (usec): min=204, max=42145, avg=1040.87, stdev=5543.43 00:40:06.794 lat (usec): min=211, max=58549, avg=1057.04, stdev=5588.21 00:40:06.794 clat percentiles (usec): 00:40:06.794 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 249], 00:40:06.794 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:40:06.794 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 379], 00:40:06.794 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:06.794 | 99.99th=[42206] 00:40:06.794 bw ( KiB/s): min= 88, max=12600, per=26.93%, avg=4032.57, stdev=5858.09, samples=7 00:40:06.794 iops : min= 22, max= 3150, avg=1008.14, stdev=1464.52, samples=7 00:40:06.794 lat (usec) : 250=24.63%, 500=73.14%, 750=0.31%, 1000=0.03% 00:40:06.794 lat (msec) : 2=0.03%, 50=1.84% 00:40:06.794 cpu : usr=0.34%, sys=1.21%, ctx=3585, majf=0, minf=2 00:40:06.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 issued rwts: total=3581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.794 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=460684: Sat Dec 7 01:07:22 2024 00:40:06.794 read: IOPS=1009, BW=4037KiB/s (4134kB/s)(12.8MiB/3252msec) 00:40:06.794 slat (nsec): min=4421, max=50540, avg=9558.63, stdev=6692.16 00:40:06.794 clat (usec): min=196, max=45003, avg=971.86, stdev=5184.76 00:40:06.794 lat (usec): min=203, max=45024, avg=981.42, stdev=5186.02 00:40:06.794 clat percentiles (usec): 00:40:06.794 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:40:06.794 | 30.00th=[ 251], 40.00th=[ 255], 
50.00th=[ 265], 60.00th=[ 293], 00:40:06.794 | 70.00th=[ 322], 80.00th=[ 379], 90.00th=[ 420], 95.00th=[ 490], 00:40:06.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:06.794 | 99.99th=[44827] 00:40:06.794 bw ( KiB/s): min= 96, max=12560, per=29.18%, avg=4368.00, stdev=6120.06, samples=6 00:40:06.794 iops : min= 24, max= 3140, avg=1092.00, stdev=1530.01, samples=6 00:40:06.794 lat (usec) : 250=28.85%, 500=66.40%, 750=3.05%, 1000=0.03% 00:40:06.794 lat (msec) : 50=1.64% 00:40:06.794 cpu : usr=0.31%, sys=1.20%, ctx=3284, majf=0, minf=1 00:40:06.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.794 issued rwts: total=3283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.795 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=460685: Sat Dec 7 01:07:22 2024 00:40:06.795 read: IOPS=218, BW=874KiB/s (895kB/s)(2556KiB/2926msec) 00:40:06.795 slat (nsec): min=4558, max=59508, avg=8754.90, stdev=5541.81 00:40:06.795 clat (usec): min=231, max=41984, avg=4531.40, stdev=12482.00 00:40:06.795 lat (usec): min=238, max=42001, avg=4540.16, stdev=12485.31 00:40:06.795 clat percentiles (usec): 00:40:06.795 | 1.00th=[ 235], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:40:06.795 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:40:06.795 | 70.00th=[ 265], 80.00th=[ 310], 90.00th=[40633], 95.00th=[41157], 00:40:06.795 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:40:06.795 | 99.99th=[42206] 00:40:06.795 bw ( KiB/s): min= 96, max= 2576, per=5.42%, avg=812.80, stdev=1091.47, samples=5 00:40:06.795 iops : min= 24, max= 644, avg=203.20, stdev=272.87, samples=5 00:40:06.795 lat (usec) : 250=44.06%, 500=44.84%, 750=0.47% 00:40:06.795 lat (msec) : 50=10.47% 00:40:06.795 cpu : usr=0.00%, sys=0.31%, ctx=641, majf=0, minf=2 00:40:06.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:06.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.795 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.795 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:06.795 00:40:06.795 Run status group 0 (all jobs): 00:40:06.795 READ: bw=14.6MiB/s (15.3MB/s), 874KiB/s-7681KiB/s (895kB/s-7865kB/s), io=55.5MiB (58.2MB), run=2926-3799msec 00:40:06.795 00:40:06.795 Disk stats (read/write): 00:40:06.795 nvme0n1: ios=6227/0, merge=0/0, ticks=4234/0, in_queue=4234, util=98.45% 00:40:06.795 nvme0n2: ios=3615/0, merge=0/0, ticks=3650/0, in_queue=3650, util=98.77% 00:40:06.795 nvme0n3: ios=3278/0, merge=0/0, ticks=3011/0, in_queue=3011, util=96.79% 00:40:06.795 nvme0n4: ios=516/0, merge=0/0, ticks=2824/0, in_queue=2824, util=96.71% 00:40:07.054 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:07.054 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:07.312 01:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:07.312 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:07.571 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:07.571 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:07.864 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:07.864 01:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:08.239 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:08.239 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 460586 00:40:08.239 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:08.239 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:08.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:08.510 nvmf hotplug test: fio failed as expected 00:40:08.510 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:08.511 01:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.511 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.511 rmmod nvme_tcp 00:40:08.772 rmmod nvme_fabrics 00:40:08.772 rmmod nvme_keyring 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 458690 ']' 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 458690 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 458690 ']' 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 458690 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458690 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458690' 00:40:08.772 killing process with pid 458690 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 458690 00:40:08.772 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 458690 00:40:09.049 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.049 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.049 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.050 01:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.957 01:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.957 00:40:10.957 real 0m23.992s 00:40:10.957 user 1m8.648s 00:40:10.957 sys 0m9.530s 00:40:10.957 01:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.957 01:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.957 ************************************ 00:40:10.957 END TEST nvmf_fio_target 00:40:10.957 ************************************ 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:10.957 ************************************ 00:40:10.957 START TEST nvmf_bdevio 00:40:10.957 ************************************ 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:10.957 * Looking for test storage... 
00:40:10.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:10.957 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.219 --rc genhtml_branch_coverage=1 00:40:11.219 --rc genhtml_function_coverage=1 00:40:11.219 --rc genhtml_legend=1 00:40:11.219 --rc geninfo_all_blocks=1 00:40:11.219 --rc geninfo_unexecuted_blocks=1 00:40:11.219 00:40:11.219 ' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.219 --rc genhtml_branch_coverage=1 00:40:11.219 --rc genhtml_function_coverage=1 00:40:11.219 --rc genhtml_legend=1 00:40:11.219 --rc geninfo_all_blocks=1 00:40:11.219 --rc geninfo_unexecuted_blocks=1 00:40:11.219 00:40:11.219 ' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.219 --rc genhtml_branch_coverage=1 00:40:11.219 --rc genhtml_function_coverage=1 00:40:11.219 --rc genhtml_legend=1 00:40:11.219 --rc geninfo_all_blocks=1 00:40:11.219 --rc geninfo_unexecuted_blocks=1 00:40:11.219 00:40:11.219 ' 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:11.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.219 --rc genhtml_branch_coverage=1 00:40:11.219 --rc genhtml_function_coverage=1 00:40:11.219 --rc genhtml_legend=1 00:40:11.219 --rc geninfo_all_blocks=1 00:40:11.219 --rc geninfo_unexecuted_blocks=1 00:40:11.219 00:40:11.219 ' 00:40:11.219 01:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.219 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.220 01:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:11.220 01:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:13.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:13.751 01:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:13.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:13.751 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:13.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:13.751 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:13.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:13.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:40:13.752 00:40:13.752 --- 10.0.0.2 ping statistics --- 00:40:13.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.752 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:13.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:13.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:40:13.752 00:40:13.752 --- 10.0.0.1 ping statistics --- 00:40:13.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.752 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.752 01:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=463421 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 463421 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 463421 ']' 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:13.752 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:13.752 [2024-12-07 01:07:29.673490] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:13.752 [2024-12-07 01:07:29.674647] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:13.752 [2024-12-07 01:07:29.674720] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.752 [2024-12-07 01:07:29.748579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:13.752 [2024-12-07 01:07:29.798293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.752 [2024-12-07 01:07:29.798351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.752 [2024-12-07 01:07:29.798381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.752 [2024-12-07 01:07:29.798392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.752 [2024-12-07 01:07:29.798402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.752 [2024-12-07 01:07:29.800094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:13.752 [2024-12-07 01:07:29.800144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:13.752 [2024-12-07 01:07:29.800192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:13.752 [2024-12-07 01:07:29.800194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:13.752 [2024-12-07 01:07:29.892526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
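The nvmf_tcp_init trace above reduces to splitting the two cvl_0_* ports across a network namespace and opening TCP/4420 toward the target. A minimal sketch of that topology, using the same interface names and addresses the trace shows (run as root; assumes cvl_0_0/cvl_0_1 exist, as they do on this host):

  # Sketch of the topology nvmf_tcp_init builds, condensed from the trace.
  ip netns add cvl_0_0_ns_spdk                          # target lives in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

nvmfappstart then runs nvmf_tgt prefixed with "ip netns exec cvl_0_0_ns_spdk" and with --interrupt-mode -m 0x78, which is why the reactor and spdk_thread interrupt-mode notices follow.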
00:40:13.752 [2024-12-07 01:07:29.892728] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:13.752 [2024-12-07 01:07:29.893053] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:13.752 [2024-12-07 01:07:29.893662] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:13.752 [2024-12-07 01:07:29.893871] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 [2024-12-07 01:07:29.940941] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.010 01:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 Malloc0 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.010 01:07:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:14.010 [2024-12-07 01:07:30.025146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:14.010 { 00:40:14.010 "params": { 00:40:14.010 "name": "Nvme$subsystem", 00:40:14.010 "trtype": "$TEST_TRANSPORT", 00:40:14.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:14.010 "adrfam": "ipv4", 00:40:14.010 "trsvcid": "$NVMF_PORT", 00:40:14.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:14.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:14.010 "hdgst": ${hdgst:-false}, 00:40:14.010 "ddgst": ${ddgst:-false} 00:40:14.010 }, 00:40:14.010 "method": "bdev_nvme_attach_controller" 00:40:14.010 } 00:40:14.010 EOF 00:40:14.010 )") 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:14.010 01:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:14.010 "params": { 00:40:14.011 "name": "Nvme1", 00:40:14.011 "trtype": "tcp", 00:40:14.011 "traddr": "10.0.0.2", 00:40:14.011 "adrfam": "ipv4", 00:40:14.011 "trsvcid": "4420", 00:40:14.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:14.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:14.011 "hdgst": false, 00:40:14.011 "ddgst": false 00:40:14.011 }, 00:40:14.011 "method": "bdev_nvme_attach_controller" 00:40:14.011 }' 00:40:14.011 [2024-12-07 01:07:30.075503] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
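The bdevio setup just traced is an ordinary RPC sequence on the target plus a generated JSON config for the initiator side. A sketch of the same calls; rpc.py stands in here for the harness's rpc_cmd wrapper (an assumption about invocation only, the calls and flags are copied verbatim from the trace), talking to the default /var/tmp/spdk.sock:

  # Target-side provisioning for the bdevio run.
  rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u 8192: 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The initiator side is the bdevio app fed the JSON printed above: a single
  # bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, tcp, no digests.

The malloc numbers line up with the later "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" line in the test output.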
00:40:14.011 [2024-12-07 01:07:30.075581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463455 ] 00:40:14.011 [2024-12-07 01:07:30.146539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:14.268 [2024-12-07 01:07:30.197418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:14.268 [2024-12-07 01:07:30.197472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:14.268 [2024-12-07 01:07:30.197475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.268 I/O targets: 00:40:14.268 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:14.268 00:40:14.268 00:40:14.268 CUnit - A unit testing framework for C - Version 2.1-3 00:40:14.268 http://cunit.sourceforge.net/ 00:40:14.268 00:40:14.268 00:40:14.268 Suite: bdevio tests on: Nvme1n1 00:40:14.268 Test: blockdev write read block ...passed 00:40:14.526 Test: blockdev write zeroes read block ...passed 00:40:14.526 Test: blockdev write zeroes read no split ...passed 00:40:14.526 Test: blockdev write zeroes read split ...passed 00:40:14.526 Test: blockdev write zeroes read split partial ...passed 00:40:14.526 Test: blockdev reset ...[2024-12-07 01:07:30.474044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:14.526 [2024-12-07 01:07:30.474151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4d700 (9): Bad file descriptor 00:40:14.526 [2024-12-07 01:07:30.478185] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:14.526 passed 00:40:14.526 Test: blockdev write read 8 blocks ...passed 00:40:14.526 Test: blockdev write read size > 128k ...passed 00:40:14.526 Test: blockdev write read invalid size ...passed 00:40:14.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:14.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:14.526 Test: blockdev write read max offset ...passed 00:40:14.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:14.526 Test: blockdev writev readv 8 blocks ...passed 00:40:14.526 Test: blockdev writev readv 30 x 1block ...passed 00:40:14.784 Test: blockdev writev readv block ...passed 00:40:14.784 Test: blockdev writev readv size > 128k ...passed 00:40:14.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:14.784 Test: blockdev comparev and writev ...[2024-12-07 01:07:30.731393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.731429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.731453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.731470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.731871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.731905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.731940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.731969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.732409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.732437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.732467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.732485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.732885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.732910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.732932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:14.784 [2024-12-07 01:07:30.732949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:14.784 passed 00:40:14.784 Test: blockdev nvme passthru rw ...passed 00:40:14.784 Test: blockdev nvme passthru vendor specific ...[2024-12-07 01:07:30.815281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:14.784 [2024-12-07 01:07:30.815310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.815456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:14.784 [2024-12-07 01:07:30.815480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:14.784 [2024-12-07 01:07:30.815622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:14.785 [2024-12-07 01:07:30.815645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:14.785 [2024-12-07 01:07:30.815785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:14.785 [2024-12-07 01:07:30.815808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:14.785 passed 00:40:14.785 Test: blockdev nvme admin passthru ...passed 00:40:14.785 Test: blockdev copy ...passed 00:40:14.785 00:40:14.785 Run Summary: Type Total Ran Passed Failed Inactive 00:40:14.785 suites 1 1 n/a 0 0 00:40:14.785 tests 23 23 23 0 0 00:40:14.785 asserts 152 152 152 0 n/a 00:40:14.785 00:40:14.785 Elapsed time = 1.017 seconds 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:15.042 rmmod nvme_tcp 00:40:15.042 rmmod nvme_fabrics 00:40:15.042 rmmod nvme_keyring 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
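Teardown is the mirror image; what nvmftestfini has executed so far in the trace reduces to the following sketch (rpc.py again standing in for rpc_cmd):

  # Cleanup steps traced above.
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
  sync
  modprobe -v -r nvme-tcp        # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics    # the harness wraps these removals in a {1..20} retry loop

Killing the target process, restoring the SPDK iptables rule and removing the cvl_0_0_ns_spdk namespace follow next in the trace and complete the cleanup.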
00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 463421 ']' 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 463421 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 463421 ']' 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 463421 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463421 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463421' 00:40:15.042 killing process with pid 463421 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 463421 00:40:15.042 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 463421 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.300 01:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.839 00:40:17.839 real 0m6.351s 00:40:17.839 user 0m7.409s 
00:40:17.839 sys 0m2.512s 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:17.839 ************************************ 00:40:17.839 END TEST nvmf_bdevio 00:40:17.839 ************************************ 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:17.839 00:40:17.839 real 3m55.476s 00:40:17.839 user 8m52.872s 00:40:17.839 sys 1m24.512s 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.839 01:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:17.839 ************************************ 00:40:17.839 END TEST nvmf_target_core_interrupt_mode 00:40:17.839 ************************************ 00:40:17.839 01:07:33 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:17.839 01:07:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:17.839 01:07:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.839 01:07:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:17.839 ************************************ 00:40:17.839 START TEST nvmf_interrupt 00:40:17.839 ************************************ 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:17.839 * Looking for test storage... 
00:40:17.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.839 --rc genhtml_branch_coverage=1 00:40:17.839 --rc genhtml_function_coverage=1 00:40:17.839 --rc genhtml_legend=1 00:40:17.839 --rc geninfo_all_blocks=1 00:40:17.839 --rc geninfo_unexecuted_blocks=1 00:40:17.839 00:40:17.839 ' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.839 --rc genhtml_branch_coverage=1 00:40:17.839 --rc genhtml_function_coverage=1 00:40:17.839 --rc genhtml_legend=1 00:40:17.839 --rc geninfo_all_blocks=1 00:40:17.839 --rc geninfo_unexecuted_blocks=1 00:40:17.839 00:40:17.839 ' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.839 --rc genhtml_branch_coverage=1 00:40:17.839 --rc genhtml_function_coverage=1 00:40:17.839 --rc genhtml_legend=1 00:40:17.839 --rc geninfo_all_blocks=1 00:40:17.839 --rc geninfo_unexecuted_blocks=1 00:40:17.839 00:40:17.839 ' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:17.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.839 --rc genhtml_branch_coverage=1 00:40:17.839 --rc genhtml_function_coverage=1 00:40:17.839 --rc genhtml_legend=1 00:40:17.839 --rc geninfo_all_blocks=1 00:40:17.839 --rc geninfo_unexecuted_blocks=1 00:40:17.839 00:40:17.839 ' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.839 01:07:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.840 01:07:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:19.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.750 01:07:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:19.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:19.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:19.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:19.750 01:07:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:40:19.750 00:40:19.750 --- 10.0.0.2 ping statistics --- 00:40:19.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.750 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:19.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:40:19.750 00:40:19.750 --- 10.0.0.1 ping statistics --- 00:40:19.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.750 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.750 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=465537 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 465537 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 465537 ']' 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.751 01:07:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.011 [2024-12-07 01:07:35.898650] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:20.011 [2024-12-07 01:07:35.899760] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:20.011 [2024-12-07 01:07:35.899823] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.011 [2024-12-07 01:07:35.971548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:20.011 [2024-12-07 01:07:36.015748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:20.011 [2024-12-07 01:07:36.015800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.011 [2024-12-07 01:07:36.015828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.011 [2024-12-07 01:07:36.015839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.011 [2024-12-07 01:07:36.015848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:20.011 [2024-12-07 01:07:36.017129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.011 [2024-12-07 01:07:36.017135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.011 [2024-12-07 01:07:36.101798] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:20.011 [2024-12-07 01:07:36.101833] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:20.011 [2024-12-07 01:07:36.102084] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:20.011 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:20.270 5000+0 records in 00:40:20.270 5000+0 records out 00:40:20.270 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0138719 s, 738 MB/s 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.270 AIO0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.270 [2024-12-07 01:07:36.205782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.270 01:07:36 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:20.270 [2024-12-07 01:07:36.234110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 465537 0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 0 idle 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465537 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465537 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:20.270 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:20.530 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 465537 1 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 1 idle 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465541 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465541 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=465696 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
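(The reactor_is_busy_or_idle helper traced above samples a single batch iteration of top for the target PID, picks out the reactor_<idx> thread line, and compares its %CPU field against a threshold: at or below the idle threshold counts as idle, at or above the busy threshold counts as busy. A simplified sketch of the idle side of that check, under the assumption that SPDK names its reactor threads reactor_0, reactor_1, ... as seen in the output above:)

  # is_reactor_idle <pid> <idx>: succeed if thread reactor_<idx> of <pid> is at or below the idle threshold.
  is_reactor_idle() {
      local pid=$1 idx=$2 idle_threshold=30
      local line cpu
      # -b batch mode, -H per-thread rows, -n 1 one sample, -w 256 wide columns
      line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
      cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU is the 9th column
      cpu=${cpu%.*}                                                 # truncate "99.9" -> "99", "0.0" -> "0"
      (( cpu <= idle_threshold ))
  }

(The busy variant is the same sampling with the comparison flipped against the busy threshold, which is lowered to 30 for the perf run that follows.)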
00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 465537 0 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 465537 0 busy 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:20.531 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465537 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.46 reactor_0' 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465537 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.46 reactor_0 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 465537 1 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 465537 1 busy 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465541 root 20 0 128.2g 48384 34560 R 93.3 0.1 0:00.26 reactor_1' 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465541 root 20 0 128.2g 48384 34560 R 93.3 0.1 0:00.26 reactor_1 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:20.791 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:20.792 01:07:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 465696 00:40:30.772 Initializing NVMe Controllers 00:40:30.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:30.772 Controller IO queue size 256, less than required. 00:40:30.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:30.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:30.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:30.772 Initialization complete. Launching workers. 
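(In the perf summary that follows, each row is one NVMe-oF queue pair: IOPS, throughput in MiB/s, and average/min/max latency in microseconds. With the 4096-byte I/O size passed to spdk_nvme_perf above, MiB/s is simply IOPS x 4096 / 2^20, and the Total row adds the per-core IOPS and reports an IOPS-weighted average latency. A one-line sanity check of the first row:)

  awk 'BEGIN { printf "%.2f MiB/s\n", 13904.80 * 4096 / 1048576 }'   # prints 54.32, matching the table below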
00:40:30.772 ======================================================== 00:40:30.772 Latency(us) 00:40:30.772 Device Information : IOPS MiB/s Average min max 00:40:30.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13904.80 54.32 18422.45 4334.72 22287.02 00:40:30.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13403.10 52.36 19117.34 4159.30 60462.43 00:40:30.772 ======================================================== 00:40:30.772 Total : 27307.89 106.67 18763.51 4159.30 60462.43 00:40:30.772 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 465537 0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 0 idle 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465537 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0' 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465537 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 465537 1 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 1 idle 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:30.772 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:31.030 01:07:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465541 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.97 reactor_1' 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465541 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.97 reactor_1 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:31.030 01:07:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:31.286 01:07:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:31.286 01:07:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:31.286 01:07:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:31.286 01:07:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:31.286 01:07:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 465537 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 0 idle 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465537 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.30 reactor_0' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465537 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.30 reactor_0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 465537 1 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 465537 1 idle 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=465537 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:33.821 01:07:49 
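(The host-side attach a few lines above is nvme-cli connecting the kernel NVMe/TCP initiator to the listener, after which waitforserial polls lsblk until a namespace carrying the subsystem serial SPDKISFASTANDAWESOME shows up. A condensed sketch of that step; here the host NQN is generated on the fly rather than derived from the system UUID as the test does:)

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$(nvme gen-hostnqn)"
  for _ in $(seq 1 15); do                                   # give the namespace up to ~30 s to appear
      lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
      sleep 2
  done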
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 465537 -w 256 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 465541 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 465541 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:33.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:33.821 rmmod nvme_tcp 00:40:33.821 rmmod nvme_fabrics 00:40:33.821 rmmod nvme_keyring 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:33.821 01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 465537 ']' 00:40:33.822 
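(Teardown runs in the reverse order of setup: detach the kernel initiator from the subsystem, unload the host fabrics modules — the rmmod lines above — and then kill the nvmf_tgt reactor process, which is what the killprocess call that follows does. A minimal sketch, with $tgt_pid standing in for the PID the script tracked, 465537 here:)

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # detach the initiator from cnode1
  modprobe -v -r nvme-tcp nvme-fabrics                # unload the host transport modules
  kill "$tgt_pid"                                     # stop the SPDK target; the harness then waits for it to exit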
01:07:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 465537 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 465537 ']' 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 465537 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465537 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465537' 00:40:33.822 killing process with pid 465537 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 465537 00:40:33.822 01:07:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 465537 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:34.082 01:07:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.620 01:07:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:36.620 00:40:36.620 real 0m18.762s 00:40:36.620 user 0m36.818s 00:40:36.620 sys 0m6.643s 00:40:36.620 01:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:36.620 01:07:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:36.620 ************************************ 00:40:36.620 END TEST nvmf_interrupt 00:40:36.620 ************************************ 00:40:36.620 00:40:36.620 real 32m58.216s 00:40:36.620 user 87m15.147s 00:40:36.620 sys 7m58.609s 00:40:36.620 01:07:52 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:36.620 01:07:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:36.620 ************************************ 00:40:36.620 END TEST nvmf_tcp 00:40:36.620 ************************************ 00:40:36.620 01:07:52 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:36.621 01:07:52 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:36.621 01:07:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:40:36.621 01:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:36.621 01:07:52 -- common/autotest_common.sh@10 -- # set +x 00:40:36.621 ************************************ 00:40:36.621 START TEST spdkcli_nvmf_tcp 00:40:36.621 ************************************ 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:36.621 * Looking for test storage... 00:40:36.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.621 --rc genhtml_branch_coverage=1 00:40:36.621 --rc genhtml_function_coverage=1 00:40:36.621 --rc genhtml_legend=1 00:40:36.621 --rc geninfo_all_blocks=1 00:40:36.621 --rc geninfo_unexecuted_blocks=1 00:40:36.621 00:40:36.621 ' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.621 --rc genhtml_branch_coverage=1 00:40:36.621 --rc genhtml_function_coverage=1 00:40:36.621 --rc genhtml_legend=1 00:40:36.621 --rc geninfo_all_blocks=1 00:40:36.621 --rc geninfo_unexecuted_blocks=1 00:40:36.621 00:40:36.621 ' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.621 --rc genhtml_branch_coverage=1 00:40:36.621 --rc genhtml_function_coverage=1 00:40:36.621 --rc genhtml_legend=1 00:40:36.621 --rc geninfo_all_blocks=1 00:40:36.621 --rc geninfo_unexecuted_blocks=1 00:40:36.621 00:40:36.621 ' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:36.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:36.621 --rc genhtml_branch_coverage=1 00:40:36.621 --rc genhtml_function_coverage=1 00:40:36.621 --rc genhtml_legend=1 00:40:36.621 --rc geninfo_all_blocks=1 00:40:36.621 --rc geninfo_unexecuted_blocks=1 00:40:36.621 00:40:36.621 ' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:36.621 
01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:36.621 01:07:52 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:36.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=467702 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:36.621 01:07:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 467702 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 467702 ']' 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:36.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:36.622 [2024-12-07 01:07:52.513324] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
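(run_nvmf_tgt, traced above, launches the target with a two-core mask (-m 0x3) and core 0 as the main core (-p 0), then blocks in waitforlisten until the RPC socket is serving requests. A simplified start-and-wait sketch that only checks for the Unix socket to appear — the real helper also verifies the RPC is responsive — assuming the default socket path:)

  ./build/bin/nvmf_tgt -m 0x3 -p 0 &                  # two reactors on cores 0-1
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done # wait for the RPC listener socket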
00:40:36.622 [2024-12-07 01:07:52.513409] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467702 ] 00:40:36.622 [2024-12-07 01:07:52.579304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:36.622 [2024-12-07 01:07:52.624612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.622 [2024-12-07 01:07:52.624616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:36.622 01:07:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:36.622 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:36.622 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:36.622 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:36.622 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:36.622 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:36.622 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:36.622 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:36.622 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:36.622 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:36.622 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:36.622 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:36.622 ' 00:40:39.910 [2024-12-07 01:07:55.435675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.844 [2024-12-07 01:07:56.704017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:43.381 [2024-12-07 01:07:59.047058] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:45.289 [2024-12-07 01:08:01.073559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:46.671 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:46.671 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:46.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:46.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:46.671 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:46.671 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:46.671 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:46.671 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:46.671 01:08:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:47.238 
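(check_match, traced above, validates the configuration that was just built: it dumps the live spdkcli tree under /nvmf into a .test file and hands the stored .test.match expectation to SPDK's match tool, which compares the two line by line with wildcard patterns allowed in the .match file. Condensed, and assuming the repository layout used in this job:)

  ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match   # non-zero exit on mismatch
  rm -f test/spdkcli/match_files/spdkcli_nvmf.test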
01:08:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:47.238 01:08:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:47.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:47.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:47.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:47.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:47.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:47.238 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:47.238 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:47.238 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:47.238 ' 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:52.513 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:52.513 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:52.513 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:52.513 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:52.513 
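(The clear pass above deletes leaf objects first — namespaces, hosts and listen addresses — then the subsystems, and only then the malloc bdevs they referenced, so nothing is removed while still in use. The test drives this through spdkcli; the same ordering expressed with plain rpc.py calls, equivalent RPCs rather than what the script actually invokes, would be:)

  rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1     # detach namespace 1 first
  rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1          # then drop the subsystem
  rpc.py bdev_malloc_delete Malloc3                                 # finally free the backing bdev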
01:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 467702 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 467702 ']' 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 467702 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.513 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467702 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467702' 00:40:52.772 killing process with pid 467702 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 467702 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 467702 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 467702 ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 467702 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 467702 ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 467702 00:40:52.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (467702) - No such process 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 467702 is not found' 00:40:52.772 Process with pid 467702 is not found 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:52.772 00:40:52.772 real 0m16.586s 00:40:52.772 user 0m35.275s 00:40:52.772 sys 0m0.860s 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.772 01:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:52.772 ************************************ 00:40:52.772 END TEST spdkcli_nvmf_tcp 00:40:52.772 ************************************ 00:40:52.772 01:08:08 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:52.772 01:08:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:52.772 01:08:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.772 01:08:08 -- common/autotest_common.sh@10 -- # set +x 00:40:53.031 ************************************ 00:40:53.031 START TEST nvmf_identify_passthru 00:40:53.031 ************************************ 00:40:53.031 01:08:08 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:53.031 * Looking for test storage... 
00:40:53.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:53.031 01:08:08 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:53.031 01:08:08 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:40:53.031 01:08:08 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.031 --rc genhtml_branch_coverage=1 00:40:53.031 --rc genhtml_function_coverage=1 00:40:53.031 --rc genhtml_legend=1 00:40:53.031 --rc geninfo_all_blocks=1 00:40:53.031 --rc geninfo_unexecuted_blocks=1 00:40:53.031 00:40:53.031 ' 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.031 --rc genhtml_branch_coverage=1 00:40:53.031 --rc genhtml_function_coverage=1 00:40:53.031 --rc genhtml_legend=1 00:40:53.031 --rc geninfo_all_blocks=1 00:40:53.031 --rc geninfo_unexecuted_blocks=1 00:40:53.031 00:40:53.031 ' 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.031 --rc genhtml_branch_coverage=1 00:40:53.031 --rc genhtml_function_coverage=1 00:40:53.031 --rc genhtml_legend=1 00:40:53.031 --rc geninfo_all_blocks=1 00:40:53.031 --rc geninfo_unexecuted_blocks=1 00:40:53.031 00:40:53.031 ' 00:40:53.031 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:53.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.031 --rc genhtml_branch_coverage=1 00:40:53.031 --rc genhtml_function_coverage=1 00:40:53.031 --rc genhtml_legend=1 00:40:53.031 --rc geninfo_all_blocks=1 00:40:53.031 --rc geninfo_unexecuted_blocks=1 00:40:53.031 00:40:53.031 ' 00:40:53.031 01:08:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:53.031 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:53.031 01:08:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:53.031 01:08:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.031 01:08:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:53.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:53.032 01:08:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:53.032 01:08:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:53.032 01:08:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:53.032 01:08:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:53.032 01:08:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:53.032 01:08:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.032 01:08:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.032 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:53.032 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:53.032 01:08:09 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:53.032 01:08:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:55.565 01:08:11 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:55.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:55.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:55.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:55.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:55.565 01:08:11 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:55.565 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:55.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:55.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:40:55.566 00:40:55.566 --- 10.0.0.2 ping statistics --- 00:40:55.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.566 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:55.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:55.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:40:55.566 00:40:55.566 --- 10.0.0.1 ping statistics --- 00:40:55.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.566 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:55.566 01:08:11 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:55.566 01:08:11 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:55.566 01:08:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:59.765 01:08:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:59.765 01:08:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:59.765 01:08:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:59.765 01:08:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:03.965 01:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:03.965 01:08:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:03.965 01:08:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:03.965 01:08:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:03.965 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:03.965 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=472327 00:41:03.965 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:03.965 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:03.965 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 472327 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 472327 ']' 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:03.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:03.965 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:03.965 [2024-12-07 01:08:20.077558] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:03.965 [2024-12-07 01:08:20.077648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:04.226 [2024-12-07 01:08:20.151752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:04.226 [2024-12-07 01:08:20.199505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:04.226 [2024-12-07 01:08:20.199561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:04.226 [2024-12-07 01:08:20.199588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:04.226 [2024-12-07 01:08:20.199599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:04.226 [2024-12-07 01:08:20.199608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:04.226 [2024-12-07 01:08:20.202510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:04.226 [2024-12-07 01:08:20.202611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.226 [2024-12-07 01:08:20.202680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:04.226 [2024-12-07 01:08:20.202683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:04.226 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:04.226 INFO: Log level set to 20 00:41:04.226 INFO: Requests: 00:41:04.226 { 00:41:04.226 "jsonrpc": "2.0", 00:41:04.226 "method": "nvmf_set_config", 00:41:04.226 "id": 1, 00:41:04.226 "params": { 00:41:04.226 "admin_cmd_passthru": { 00:41:04.226 "identify_ctrlr": true 00:41:04.226 } 00:41:04.226 } 00:41:04.226 } 00:41:04.226 00:41:04.226 INFO: response: 00:41:04.226 { 00:41:04.226 "jsonrpc": "2.0", 00:41:04.226 "id": 1, 00:41:04.226 "result": true 00:41:04.226 } 00:41:04.226 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.226 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.226 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:04.226 INFO: Setting log level to 20 00:41:04.226 INFO: Setting log level to 20 00:41:04.226 INFO: Log level set to 20 00:41:04.226 INFO: Log level set to 20 00:41:04.226 INFO: Requests: 00:41:04.226 { 00:41:04.226 "jsonrpc": "2.0", 00:41:04.226 "method": "framework_start_init", 00:41:04.226 "id": 1 00:41:04.226 } 00:41:04.226 00:41:04.226 INFO: Requests: 00:41:04.226 { 00:41:04.226 "jsonrpc": "2.0", 00:41:04.226 "method": "framework_start_init", 00:41:04.226 "id": 1 00:41:04.226 } 00:41:04.226 00:41:04.487 [2024-12-07 01:08:20.416271] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:04.487 INFO: response: 00:41:04.487 { 00:41:04.487 "jsonrpc": "2.0", 00:41:04.487 "id": 1, 00:41:04.487 "result": true 00:41:04.487 } 00:41:04.487 00:41:04.487 INFO: response: 00:41:04.487 { 00:41:04.487 "jsonrpc": "2.0", 00:41:04.487 "id": 1, 00:41:04.487 "result": true 00:41:04.487 } 00:41:04.487 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.487 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.487 01:08:20 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:04.487 INFO: Setting log level to 40 00:41:04.487 INFO: Setting log level to 40 00:41:04.487 INFO: Setting log level to 40 00:41:04.487 [2024-12-07 01:08:20.426478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.487 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:04.487 01:08:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.487 01:08:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 Nvme0n1 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 [2024-12-07 01:08:23.332415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 [ 00:41:07.783 { 00:41:07.783 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:07.783 "subtype": "Discovery", 00:41:07.783 "listen_addresses": [], 00:41:07.783 "allow_any_host": true, 00:41:07.783 "hosts": [] 00:41:07.783 }, 00:41:07.783 { 00:41:07.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.783 "subtype": "NVMe", 00:41:07.783 "listen_addresses": [ 00:41:07.783 { 00:41:07.783 "trtype": "TCP", 00:41:07.783 "adrfam": "IPv4", 00:41:07.783 "traddr": "10.0.0.2", 00:41:07.783 "trsvcid": "4420" 00:41:07.783 } 00:41:07.783 ], 00:41:07.783 "allow_any_host": true, 00:41:07.783 "hosts": [], 00:41:07.783 "serial_number": 
"SPDK00000000000001", 00:41:07.783 "model_number": "SPDK bdev Controller", 00:41:07.783 "max_namespaces": 1, 00:41:07.783 "min_cntlid": 1, 00:41:07.783 "max_cntlid": 65519, 00:41:07.783 "namespaces": [ 00:41:07.783 { 00:41:07.783 "nsid": 1, 00:41:07.783 "bdev_name": "Nvme0n1", 00:41:07.783 "name": "Nvme0n1", 00:41:07.783 "nguid": "DBC6CBFFD77240C3B371FB59E1865D10", 00:41:07.783 "uuid": "dbc6cbff-d772-40c3-b371-fb59e1865d10" 00:41:07.783 } 00:41:07.783 ] 00:41:07.783 } 00:41:07.783 ] 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:07.783 01:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:07.783 rmmod nvme_tcp 00:41:07.783 rmmod nvme_fabrics 00:41:07.783 rmmod nvme_keyring 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 472327 ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 472327 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 472327 ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 472327 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 472327 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 472327' 00:41:07.783 killing process with pid 472327 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 472327 00:41:07.783 01:08:23 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 472327 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:09.688 01:08:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:09.688 01:08:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:09.688 01:08:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.613 01:08:27 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:11.613 00:41:11.613 real 0m18.533s 00:41:11.613 user 0m27.596s 00:41:11.613 sys 0m2.478s 00:41:11.613 01:08:27 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.613 01:08:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:11.613 ************************************ 00:41:11.613 END TEST nvmf_identify_passthru 00:41:11.613 ************************************ 00:41:11.613 01:08:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:11.613 01:08:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:11.613 01:08:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.613 01:08:27 -- common/autotest_common.sh@10 -- # set +x 00:41:11.613 ************************************ 00:41:11.613 START TEST nvmf_dif 00:41:11.613 ************************************ 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:11.613 * Looking for test storage... 
00:41:11.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:11.613 01:08:27 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:11.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.613 --rc genhtml_branch_coverage=1 00:41:11.613 --rc genhtml_function_coverage=1 00:41:11.613 --rc genhtml_legend=1 00:41:11.613 --rc geninfo_all_blocks=1 00:41:11.613 --rc geninfo_unexecuted_blocks=1 00:41:11.613 00:41:11.613 ' 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:11.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.613 --rc genhtml_branch_coverage=1 00:41:11.613 --rc genhtml_function_coverage=1 00:41:11.613 --rc genhtml_legend=1 00:41:11.613 --rc geninfo_all_blocks=1 00:41:11.613 --rc geninfo_unexecuted_blocks=1 00:41:11.613 00:41:11.613 ' 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:41:11.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.613 --rc genhtml_branch_coverage=1 00:41:11.613 --rc genhtml_function_coverage=1 00:41:11.613 --rc genhtml_legend=1 00:41:11.613 --rc geninfo_all_blocks=1 00:41:11.613 --rc geninfo_unexecuted_blocks=1 00:41:11.613 00:41:11.613 ' 00:41:11.613 01:08:27 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:11.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.613 --rc genhtml_branch_coverage=1 00:41:11.613 --rc genhtml_function_coverage=1 00:41:11.613 --rc genhtml_legend=1 00:41:11.613 --rc geninfo_all_blocks=1 00:41:11.613 --rc geninfo_unexecuted_blocks=1 00:41:11.613 00:41:11.613 ' 00:41:11.613 01:08:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:11.613 01:08:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:11.613 01:08:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:11.614 01:08:27 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:11.614 01:08:27 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.614 01:08:27 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.614 01:08:27 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.614 01:08:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.614 01:08:27 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.614 01:08:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.614 01:08:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:11.614 01:08:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:11.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:11.614 01:08:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:11.614 01:08:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:11.614 01:08:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:11.614 01:08:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:11.614 01:08:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.614 01:08:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:11.614 01:08:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:11.614 01:08:27 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:11.614 01:08:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:14.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:14.144 
01:08:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:14.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:14.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:14.144 01:08:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:14.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:14.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:14.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:41:14.145 00:41:14.145 --- 10.0.0.2 ping statistics --- 00:41:14.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.145 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:14.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:14.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:41:14.145 00:41:14.145 --- 10.0.0.1 ping statistics --- 00:41:14.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.145 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:41:14.145 01:08:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:14.145 01:08:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:14.145 01:08:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:14.145 01:08:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:15.082 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:15.082 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:15.082 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:15.082 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:15.082 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:15.082 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:15.082 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:15.082 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:15.082 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:15.082 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:15.082 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:15.082 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:15.082 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:15.082 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:15.082 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:15.082 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:15.082 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:15.341 01:08:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:15.341 01:08:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:15.341 01:08:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:15.341 01:08:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.342 01:08:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=475597 00:41:15.342 01:08:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:15.342 01:08:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 475597 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 475597 ']' 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:15.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:15.342 01:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.342 [2024-12-07 01:08:31.342067] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:15.342 [2024-12-07 01:08:31.342153] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.342 [2024-12-07 01:08:31.413699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.342 [2024-12-07 01:08:31.457115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.342 [2024-12-07 01:08:31.457172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.342 [2024-12-07 01:08:31.457199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.342 [2024-12-07 01:08:31.457210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.342 [2024-12-07 01:08:31.457220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.342 [2024-12-07 01:08:31.457769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:15.604 01:08:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 01:08:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:15.604 01:08:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:15.604 01:08:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 [2024-12-07 01:08:31.588910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.604 01:08:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 ************************************ 00:41:15.604 START TEST fio_dif_1_default 00:41:15.604 ************************************ 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 bdev_null0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:15.604 [2024-12-07 01:08:31.645232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.604 { 00:41:15.604 "params": { 00:41:15.604 "name": "Nvme$subsystem", 00:41:15.604 "trtype": "$TEST_TRANSPORT", 00:41:15.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.604 "adrfam": "ipv4", 00:41:15.604 "trsvcid": "$NVMF_PORT", 00:41:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.604 "hdgst": ${hdgst:-false}, 00:41:15.604 "ddgst": ${ddgst:-false} 00:41:15.604 }, 00:41:15.604 "method": "bdev_nvme_attach_controller" 00:41:15.604 } 00:41:15.604 EOF 00:41:15.604 )") 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:15.604 01:08:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:15.604 "params": { 00:41:15.604 "name": "Nvme0", 00:41:15.604 "trtype": "tcp", 00:41:15.604 "traddr": "10.0.0.2", 00:41:15.604 "adrfam": "ipv4", 00:41:15.604 "trsvcid": "4420", 00:41:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:15.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:15.605 "hdgst": false, 00:41:15.605 "ddgst": false 00:41:15.605 }, 00:41:15.605 "method": "bdev_nvme_attach_controller" 00:41:15.605 }' 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:15.605 01:08:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.866 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:15.866 fio-3.35 00:41:15.866 Starting 1 thread 00:41:28.087 00:41:28.087 filename0: (groupid=0, jobs=1): err= 0: pid=475823: Sat Dec 7 01:08:42 2024 00:41:28.087 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10012msec) 00:41:28.087 slat (nsec): min=4246, max=60571, avg=9445.35, stdev=2979.36 00:41:28.087 clat (usec): min=657, max=46371, avg=40833.06, stdev=2595.89 00:41:28.087 lat (usec): min=665, max=46384, avg=40842.51, stdev=2595.65 00:41:28.087 clat percentiles (usec): 00:41:28.087 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:28.087 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:28.087 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:28.087 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:41:28.087 | 99.99th=[46400] 00:41:28.087 bw ( KiB/s): min= 384, max= 416, per=99.61%, avg=390.40, stdev=13.13, samples=20 00:41:28.087 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:41:28.087 lat (usec) : 750=0.41% 00:41:28.087 lat (msec) : 50=99.59% 00:41:28.087 cpu : usr=90.90%, sys=8.82%, ctx=15, majf=0, minf=240 00:41:28.087 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:28.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.087 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.087 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:28.087 00:41:28.087 Run 
status group 0 (all jobs): 00:41:28.087 READ: bw=392KiB/s (401kB/s), 392KiB/s-392KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10012-10012msec 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.087 00:41:28.087 real 0m11.048s 00:41:28.087 user 0m10.223s 00:41:28.087 sys 0m1.153s 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:28.087 ************************************ 00:41:28.087 END TEST fio_dif_1_default 00:41:28.087 ************************************ 00:41:28.087 01:08:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:28.087 01:08:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:28.087 01:08:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:28.087 01:08:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:28.087 ************************************ 00:41:28.087 START TEST fio_dif_1_multi_subsystems 00:41:28.087 ************************************ 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:28.087 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 bdev_null0 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 [2024-12-07 01:08:42.730880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 bdev_null1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.088 { 00:41:28.088 "params": { 00:41:28.088 "name": "Nvme$subsystem", 00:41:28.088 "trtype": "$TEST_TRANSPORT", 00:41:28.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.088 "adrfam": "ipv4", 00:41:28.088 "trsvcid": "$NVMF_PORT", 00:41:28.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.088 "hdgst": ${hdgst:-false}, 00:41:28.088 "ddgst": ${ddgst:-false} 00:41:28.088 }, 00:41:28.088 "method": "bdev_nvme_attach_controller" 00:41:28.088 } 00:41:28.088 EOF 00:41:28.088 )") 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.088 { 00:41:28.088 "params": { 00:41:28.088 "name": "Nvme$subsystem", 00:41:28.088 "trtype": "$TEST_TRANSPORT", 00:41:28.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.088 "adrfam": "ipv4", 00:41:28.088 "trsvcid": "$NVMF_PORT", 00:41:28.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.088 "hdgst": ${hdgst:-false}, 00:41:28.088 "ddgst": ${ddgst:-false} 00:41:28.088 }, 00:41:28.088 "method": "bdev_nvme_attach_controller" 00:41:28.088 } 00:41:28.088 EOF 00:41:28.088 )") 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:28.088 "params": { 00:41:28.088 "name": "Nvme0", 00:41:28.088 "trtype": "tcp", 00:41:28.088 "traddr": "10.0.0.2", 00:41:28.088 "adrfam": "ipv4", 00:41:28.088 "trsvcid": "4420", 00:41:28.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:28.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:28.088 "hdgst": false, 00:41:28.088 "ddgst": false 00:41:28.088 }, 00:41:28.088 "method": "bdev_nvme_attach_controller" 00:41:28.088 },{ 00:41:28.088 "params": { 00:41:28.088 "name": "Nvme1", 00:41:28.088 "trtype": "tcp", 00:41:28.088 "traddr": "10.0.0.2", 00:41:28.088 "adrfam": "ipv4", 00:41:28.088 "trsvcid": "4420", 00:41:28.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.088 "hdgst": false, 00:41:28.088 "ddgst": false 00:41:28.088 }, 00:41:28.088 "method": "bdev_nvme_attach_controller" 00:41:28.088 }' 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:28.088 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:28.089 01:08:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.089 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:28.089 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:28.089 fio-3.35 00:41:28.089 Starting 2 threads 00:41:38.293 00:41:38.293 filename0: (groupid=0, jobs=1): err= 0: pid=477107: Sat Dec 7 01:08:53 2024 00:41:38.293 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:41:38.293 slat (nsec): min=5431, max=65238, avg=10616.31, stdev=3830.27 00:41:38.293 clat (usec): min=917, max=46878, avg=41048.45, stdev=2636.41 00:41:38.293 lat (usec): min=925, max=46891, avg=41059.06, stdev=2636.60 00:41:38.293 clat percentiles (usec): 00:41:38.293 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:38.293 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:38.293 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:41:38.294 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:41:38.294 | 99.99th=[46924] 00:41:38.294 bw ( KiB/s): min= 352, max= 416, per=49.83%, avg=388.80, stdev=15.66, samples=20 00:41:38.294 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:41:38.294 lat (usec) : 1000=0.41% 00:41:38.294 lat (msec) : 50=99.59% 00:41:38.294 cpu : usr=94.81%, sys=4.88%, ctx=37, majf=0, minf=136 00:41:38.294 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.294 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.294 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:38.294 filename1: (groupid=0, jobs=1): err= 0: pid=477108: Sat Dec 7 01:08:53 2024 00:41:38.294 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10027msec) 00:41:38.294 slat (nsec): min=7305, max=65238, avg=10459.40, stdev=3641.57 00:41:38.294 clat (usec): min=515, max=46870, avg=41061.26, stdev=5238.36 00:41:38.294 lat (usec): min=523, max=46885, avg=41071.72, stdev=5238.27 00:41:38.294 clat percentiles (usec): 00:41:38.294 | 1.00th=[ 873], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:38.294 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:41:38.294 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:38.294 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:41:38.294 | 99.99th=[46924] 00:41:38.294 bw ( KiB/s): min= 352, max= 416, per=49.83%, avg=388.80, stdev=15.66, samples=20 00:41:38.294 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:41:38.294 lat (usec) : 750=0.82%, 1000=0.82% 00:41:38.294 lat (msec) : 50=98.36% 00:41:38.294 cpu : usr=94.99%, sys=4.64%, ctx=33, majf=0, minf=129 00:41:38.294 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:38.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.294 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.294 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:38.294 00:41:38.294 Run status group 0 (all jobs): 00:41:38.294 READ: bw=779KiB/s (797kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10024-10027msec 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 00:41:38.294 real 0m11.179s 00:41:38.294 user 0m20.069s 00:41:38.294 sys 0m1.233s 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 ************************************ 00:41:38.294 END TEST fio_dif_1_multi_subsystems 00:41:38.294 ************************************ 00:41:38.294 01:08:53 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:41:38.294 01:08:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:38.294 01:08:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 ************************************ 00:41:38.294 START TEST fio_dif_rand_params 00:41:38.294 ************************************ 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 bdev_null0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.294 [2024-12-07 01:08:53.966890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.294 
01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:38.294 { 00:41:38.294 "params": { 00:41:38.294 "name": "Nvme$subsystem", 00:41:38.294 "trtype": "$TEST_TRANSPORT", 00:41:38.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.294 "adrfam": "ipv4", 00:41:38.294 "trsvcid": "$NVMF_PORT", 00:41:38.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.294 "hdgst": ${hdgst:-false}, 00:41:38.294 "ddgst": ${ddgst:-false} 00:41:38.294 }, 00:41:38.294 "method": "bdev_nvme_attach_controller" 00:41:38.294 } 00:41:38.294 EOF 00:41:38.294 )") 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:38.294 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:38.295 "params": { 00:41:38.295 "name": "Nvme0", 00:41:38.295 "trtype": "tcp", 00:41:38.295 "traddr": "10.0.0.2", 00:41:38.295 "adrfam": "ipv4", 00:41:38.295 "trsvcid": "4420", 00:41:38.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:38.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:38.295 "hdgst": false, 00:41:38.295 "ddgst": false 00:41:38.295 }, 00:41:38.295 "method": "bdev_nvme_attach_controller" 00:41:38.295 }' 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:38.295 01:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:38.295 01:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:38.295 01:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:38.295 01:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:38.295 01:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.295 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:38.295 ... 
00:41:38.295 fio-3.35 00:41:38.295 Starting 3 threads 00:41:44.851 00:41:44.851 filename0: (groupid=0, jobs=1): err= 0: pid=478508: Sat Dec 7 01:08:59 2024 00:41:44.851 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5044msec) 00:41:44.851 slat (nsec): min=4863, max=41218, avg=15173.13, stdev=4372.59 00:41:44.851 clat (usec): min=7084, max=62979, avg=13044.21, stdev=4542.41 00:41:44.851 lat (usec): min=7096, max=62990, avg=13059.39, stdev=4542.41 00:41:44.851 clat percentiles (usec): 00:41:44.851 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11076], 00:41:44.851 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13173], 00:41:44.851 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15270], 95.00th=[15795], 00:41:44.851 | 99.00th=[18482], 99.50th=[53216], 99.90th=[61604], 99.95th=[63177], 00:41:44.851 | 99.99th=[63177] 00:41:44.851 bw ( KiB/s): min=26368, max=32512, per=34.18%, avg=29516.80, stdev=1698.33, samples=10 00:41:44.851 iops : min= 206, max= 254, avg=230.60, stdev=13.27, samples=10 00:41:44.851 lat (msec) : 10=8.40%, 20=90.65%, 50=0.09%, 100=0.87% 00:41:44.851 cpu : usr=93.91%, sys=5.57%, ctx=18, majf=0, minf=97 00:41:44.851 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:44.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.851 issued rwts: total=1155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:44.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:44.851 filename0: (groupid=0, jobs=1): err= 0: pid=478509: Sat Dec 7 01:08:59 2024 00:41:44.851 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5006msec) 00:41:44.851 slat (nsec): min=4536, max=39426, avg=18098.57, stdev=5251.98 00:41:44.851 clat (usec): min=5534, max=55222, avg=13468.04, stdev=5720.40 00:41:44.851 lat (usec): min=5566, max=55235, avg=13486.14, stdev=5720.06 00:41:44.851 clat percentiles (usec): 00:41:44.851 | 1.00th=[ 8160], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11207], 00:41:44.851 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:41:44.851 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15270], 95.00th=[16188], 00:41:44.851 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54264], 99.95th=[55313], 00:41:44.851 | 99.99th=[55313] 00:41:44.851 bw ( KiB/s): min=22784, max=30976, per=32.91%, avg=28416.00, stdev=2464.35, samples=10 00:41:44.851 iops : min= 178, max= 242, avg=222.00, stdev=19.25, samples=10 00:41:44.851 lat (msec) : 10=5.75%, 20=92.36%, 100=1.89% 00:41:44.851 cpu : usr=94.21%, sys=5.25%, ctx=13, majf=0, minf=98 00:41:44.851 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:44.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.851 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:44.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:44.852 filename0: (groupid=0, jobs=1): err= 0: pid=478510: Sat Dec 7 01:08:59 2024 00:41:44.852 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(142MiB/5044msec) 00:41:44.852 slat (nsec): min=4647, max=56853, avg=15258.57, stdev=4266.82 00:41:44.852 clat (usec): min=7180, max=46989, avg=13277.56, stdev=2546.72 00:41:44.852 lat (usec): min=7199, max=47003, avg=13292.82, stdev=2546.93 00:41:44.852 clat percentiles (usec): 00:41:44.852 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11469], 
00:41:44.852 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13435], 60.00th=[13960], 00:41:44.852 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15795], 95.00th=[16450], 00:41:44.852 | 99.00th=[17695], 99.50th=[17957], 99.90th=[45876], 99.95th=[46924], 00:41:44.852 | 99.99th=[46924] 00:41:44.852 bw ( KiB/s): min=27648, max=31232, per=33.56%, avg=28979.20, stdev=1281.71, samples=10 00:41:44.852 iops : min= 216, max= 244, avg=226.40, stdev=10.01, samples=10 00:41:44.852 lat (msec) : 10=8.28%, 20=91.54%, 50=0.18% 00:41:44.852 cpu : usr=94.49%, sys=5.00%, ctx=10, majf=0, minf=145 00:41:44.852 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:44.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:44.852 issued rwts: total=1135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:44.852 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:44.852 00:41:44.852 Run status group 0 (all jobs): 00:41:44.852 READ: bw=84.3MiB/s (88.4MB/s), 27.8MiB/s-28.6MiB/s (29.1MB/s-30.0MB/s), io=425MiB (446MB), run=5006-5044msec 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 bdev_null0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 [2024-12-07 01:09:00.132666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 bdev_null1 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 bdev_null2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:44.852 { 00:41:44.852 "params": { 00:41:44.852 "name": "Nvme$subsystem", 00:41:44.852 
"trtype": "$TEST_TRANSPORT", 00:41:44.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:44.852 "adrfam": "ipv4", 00:41:44.852 "trsvcid": "$NVMF_PORT", 00:41:44.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:44.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:44.852 "hdgst": ${hdgst:-false}, 00:41:44.852 "ddgst": ${ddgst:-false} 00:41:44.852 }, 00:41:44.852 "method": "bdev_nvme_attach_controller" 00:41:44.852 } 00:41:44.852 EOF 00:41:44.852 )") 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:44.852 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:44.853 { 00:41:44.853 "params": { 00:41:44.853 "name": "Nvme$subsystem", 00:41:44.853 "trtype": "$TEST_TRANSPORT", 00:41:44.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:44.853 "adrfam": "ipv4", 00:41:44.853 "trsvcid": "$NVMF_PORT", 00:41:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:44.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:44.853 "hdgst": ${hdgst:-false}, 00:41:44.853 "ddgst": ${ddgst:-false} 00:41:44.853 }, 00:41:44.853 "method": "bdev_nvme_attach_controller" 00:41:44.853 } 00:41:44.853 EOF 00:41:44.853 )") 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:44.853 { 00:41:44.853 "params": { 00:41:44.853 "name": "Nvme$subsystem", 00:41:44.853 "trtype": "$TEST_TRANSPORT", 00:41:44.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:44.853 "adrfam": "ipv4", 00:41:44.853 "trsvcid": "$NVMF_PORT", 00:41:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:44.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:44.853 "hdgst": ${hdgst:-false}, 00:41:44.853 "ddgst": ${ddgst:-false} 00:41:44.853 }, 00:41:44.853 "method": "bdev_nvme_attach_controller" 00:41:44.853 } 00:41:44.853 EOF 00:41:44.853 )") 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:44.853 "params": { 00:41:44.853 "name": "Nvme0", 00:41:44.853 "trtype": "tcp", 00:41:44.853 "traddr": "10.0.0.2", 00:41:44.853 "adrfam": "ipv4", 00:41:44.853 "trsvcid": "4420", 00:41:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:44.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:44.853 "hdgst": false, 00:41:44.853 "ddgst": false 00:41:44.853 }, 00:41:44.853 "method": "bdev_nvme_attach_controller" 00:41:44.853 },{ 00:41:44.853 "params": { 00:41:44.853 "name": "Nvme1", 00:41:44.853 "trtype": "tcp", 00:41:44.853 "traddr": "10.0.0.2", 00:41:44.853 "adrfam": "ipv4", 00:41:44.853 "trsvcid": "4420", 00:41:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:44.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:44.853 "hdgst": false, 00:41:44.853 "ddgst": false 00:41:44.853 }, 00:41:44.853 "method": "bdev_nvme_attach_controller" 00:41:44.853 },{ 00:41:44.853 "params": { 00:41:44.853 "name": "Nvme2", 00:41:44.853 "trtype": "tcp", 00:41:44.853 "traddr": "10.0.0.2", 00:41:44.853 "adrfam": "ipv4", 00:41:44.853 "trsvcid": "4420", 00:41:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:44.853 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:44.853 "hdgst": false, 00:41:44.853 "ddgst": false 00:41:44.853 }, 00:41:44.853 "method": "bdev_nvme_attach_controller" 00:41:44.853 }' 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:44.853 01:09:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:44.853 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:44.853 ... 00:41:44.853 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:44.853 ... 00:41:44.853 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:44.853 ... 00:41:44.853 fio-3.35 00:41:44.853 Starting 24 threads 00:41:57.055 00:41:57.055 filename0: (groupid=0, jobs=1): err= 0: pid=479453: Sat Dec 7 01:09:11 2024 00:41:57.055 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10086msec) 00:41:57.055 slat (usec): min=8, max=102, avg=33.75, stdev=24.07 00:41:57.055 clat (msec): min=190, max=543, avg=387.65, stdev=69.58 00:41:57.055 lat (msec): min=190, max=543, avg=387.69, stdev=69.58 00:41:57.055 clat percentiles (msec): 00:41:57.055 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 279], 20.00th=[ 347], 00:41:57.055 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 397], 00:41:57.055 | 70.00th=[ 409], 80.00th=[ 430], 90.00th=[ 456], 95.00th=[ 535], 00:41:57.055 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:41:57.055 | 99.99th=[ 542] 00:41:57.055 bw ( KiB/s): min= 128, max= 256, per=3.19%, avg=168.42, stdev=56.03, samples=19 00:41:57.055 iops : min= 32, max= 64, avg=42.11, stdev=14.01, samples=19 00:41:57.055 lat (msec) : 250=0.48%, 500=91.83%, 750=7.69% 00:41:57.055 cpu : usr=98.40%, sys=1.09%, ctx=27, majf=0, minf=18 00:41:57.055 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:41:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.055 filename0: (groupid=0, jobs=1): err= 0: pid=479454: Sat Dec 7 01:09:11 2024 00:41:57.055 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10086msec) 00:41:57.055 slat (usec): min=6, max=119, avg=50.51, stdev=29.05 00:41:57.055 clat (msec): min=148, max=656, avg=387.48, stdev=65.38 00:41:57.055 lat (msec): min=148, max=656, avg=387.53, stdev=65.37 00:41:57.055 clat percentiles (msec): 00:41:57.055 | 1.00th=[ 259], 5.00th=[ 264], 10.00th=[ 300], 20.00th=[ 347], 00:41:57.055 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 397], 00:41:57.055 | 70.00th=[ 405], 80.00th=[ 430], 90.00th=[ 443], 95.00th=[ 518], 00:41:57.055 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 659], 99.95th=[ 659], 00:41:57.055 | 99.99th=[ 659] 00:41:57.055 bw ( KiB/s): min= 128, max= 256, per=3.19%, avg=168.42, stdev=61.13, samples=19 00:41:57.055 iops : min= 32, max= 64, avg=42.11, stdev=15.28, samples=19 00:41:57.055 lat (msec) : 250=0.48%, 500=93.75%, 750=5.77% 00:41:57.055 cpu : usr=98.00%, sys=1.42%, ctx=53, majf=0, minf=28 00:41:57.055 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 complete 
: 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.055 filename0: (groupid=0, jobs=1): err= 0: pid=479455: Sat Dec 7 01:09:11 2024 00:41:57.055 read: IOPS=61, BW=248KiB/s (254kB/s)(2504KiB/10114msec) 00:41:57.055 slat (usec): min=5, max=105, avg=17.09, stdev=17.32 00:41:57.055 clat (msec): min=112, max=427, avg=257.35, stdev=44.35 00:41:57.055 lat (msec): min=112, max=427, avg=257.36, stdev=44.34 00:41:57.055 clat percentiles (msec): 00:41:57.055 | 1.00th=[ 112], 5.00th=[ 123], 10.00th=[ 220], 20.00th=[ 251], 00:41:57.055 | 30.00th=[ 253], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 268], 00:41:57.055 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 296], 00:41:57.055 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:41:57.055 | 99.99th=[ 426] 00:41:57.055 bw ( KiB/s): min= 144, max= 368, per=4.61%, avg=244.00, stdev=54.17, samples=20 00:41:57.055 iops : min= 36, max= 92, avg=61.00, stdev=13.54, samples=20 00:41:57.055 lat (msec) : 250=18.05%, 500=81.95% 00:41:57.055 cpu : usr=98.39%, sys=1.16%, ctx=24, majf=0, minf=17 00:41:57.055 IO depths : 1=0.5%, 2=1.6%, 4=9.4%, 8=76.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:41:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 complete : 0=0.0%, 4=89.6%, 8=5.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.055 filename0: (groupid=0, jobs=1): err= 0: pid=479456: Sat Dec 7 01:09:11 2024 00:41:57.055 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10114msec) 00:41:57.055 slat (nsec): min=8125, max=92058, avg=21818.75, stdev=21463.05 00:41:57.055 clat (msec): min=110, max=415, avg=251.52, stdev=57.42 00:41:57.055 lat (msec): min=110, max=415, avg=251.54, stdev=57.41 00:41:57.055 clat percentiles (msec): 00:41:57.055 | 1.00th=[ 111], 5.00th=[ 123], 10.00th=[ 176], 20.00th=[ 199], 00:41:57.055 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 271], 00:41:57.055 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 338], 00:41:57.055 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:41:57.055 | 99.99th=[ 418] 00:41:57.055 bw ( KiB/s): min= 176, max= 384, per=4.73%, avg=249.60, stdev=58.13, samples=20 00:41:57.055 iops : min= 44, max= 96, avg=62.40, stdev=14.53, samples=20 00:41:57.055 lat (msec) : 250=29.38%, 500=70.62% 00:41:57.055 cpu : usr=98.49%, sys=1.08%, ctx=21, majf=0, minf=29 00:41:57.055 IO depths : 1=0.2%, 2=0.6%, 4=7.2%, 8=79.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:41:57.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 complete : 0=0.0%, 4=88.9%, 8=6.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.055 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.055 filename0: (groupid=0, jobs=1): err= 0: pid=479457: Sat Dec 7 01:09:11 2024 00:41:57.055 read: IOPS=56, BW=224KiB/s (230kB/s)(2264KiB/10089msec) 00:41:57.055 slat (usec): min=6, max=104, avg=24.13, stdev=25.65 00:41:57.055 clat (msec): min=195, max=481, avg=284.65, stdev=44.64 00:41:57.055 lat (msec): min=195, max=481, avg=284.67, stdev=44.65 00:41:57.055 clat percentiles (msec): 00:41:57.055 | 1.00th=[ 197], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:41:57.056 | 
30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:41:57.056 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 384], 95.00th=[ 393], 00:41:57.056 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 481], 99.95th=[ 481], 00:41:57.056 | 99.99th=[ 481] 00:41:57.056 bw ( KiB/s): min= 128, max= 256, per=4.18%, avg=220.00, stdev=52.65, samples=20 00:41:57.056 iops : min= 32, max= 64, avg=55.00, stdev=13.16, samples=20 00:41:57.056 lat (msec) : 250=8.48%, 500=91.52% 00:41:57.056 cpu : usr=98.41%, sys=1.06%, ctx=40, majf=0, minf=22 00:41:57.056 IO depths : 1=1.2%, 2=5.8%, 4=20.0%, 8=61.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename0: (groupid=0, jobs=1): err= 0: pid=479458: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=44, BW=177KiB/s (181kB/s)(1784KiB/10089msec) 00:41:57.056 slat (usec): min=7, max=115, avg=20.22, stdev=17.29 00:41:57.056 clat (msec): min=158, max=552, avg=361.70, stdev=73.87 00:41:57.056 lat (msec): min=158, max=552, avg=361.72, stdev=73.87 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 159], 5.00th=[ 255], 10.00th=[ 266], 20.00th=[ 288], 00:41:57.056 | 30.00th=[ 326], 40.00th=[ 372], 50.00th=[ 388], 60.00th=[ 388], 00:41:57.056 | 70.00th=[ 393], 80.00th=[ 422], 90.00th=[ 435], 95.00th=[ 439], 00:41:57.056 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:41:57.056 | 99.99th=[ 550] 00:41:57.056 bw ( KiB/s): min= 128, max= 256, per=3.26%, avg=172.00, stdev=56.84, samples=20 00:41:57.056 iops : min= 32, max= 64, avg=43.00, stdev=14.21, samples=20 00:41:57.056 lat (msec) : 250=4.93%, 500=91.48%, 750=3.59% 00:41:57.056 cpu : usr=98.18%, sys=1.22%, ctx=55, majf=0, minf=26 00:41:57.056 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename0: (groupid=0, jobs=1): err= 0: pid=479459: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=64, BW=257KiB/s (263kB/s)(2600KiB/10117msec) 00:41:57.056 slat (nsec): min=8114, max=84007, avg=12972.87, stdev=8635.49 00:41:57.056 clat (msec): min=16, max=422, avg=247.77, stdev=65.67 00:41:57.056 lat (msec): min=16, max=422, avg=247.78, stdev=65.66 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 17], 5.00th=[ 122], 10.00th=[ 186], 20.00th=[ 215], 00:41:57.056 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 268], 00:41:57.056 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 296], 95.00th=[ 334], 00:41:57.056 | 99.00th=[ 414], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:41:57.056 | 99.99th=[ 422] 00:41:57.056 bw ( KiB/s): min= 176, max= 496, per=4.80%, avg=253.60, stdev=72.35, samples=20 00:41:57.056 iops : min= 44, max= 124, avg=63.40, stdev=18.09, samples=20 00:41:57.056 lat (msec) : 20=2.46%, 100=2.46%, 250=27.08%, 500=68.00% 00:41:57.056 cpu : usr=98.46%, sys=1.04%, ctx=17, majf=0, minf=33 00:41:57.056 IO depths : 1=0.5%, 2=1.4%, 4=8.6%, 8=77.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=89.3%, 8=5.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename0: (groupid=0, jobs=1): err= 0: pid=479460: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=61, BW=246KiB/s (252kB/s)(2480KiB/10089msec) 00:41:57.056 slat (usec): min=8, max=114, avg=38.69, stdev=32.79 00:41:57.056 clat (msec): min=151, max=434, avg=259.49, stdev=43.22 00:41:57.056 lat (msec): min=151, max=434, avg=259.53, stdev=43.21 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 157], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 230], 00:41:57.056 | 30.00th=[ 251], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 268], 00:41:57.056 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 305], 00:41:57.056 | 99.00th=[ 393], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 435], 00:41:57.056 | 99.99th=[ 435] 00:41:57.056 bw ( KiB/s): min= 128, max= 384, per=4.57%, avg=241.60, stdev=53.67, samples=20 00:41:57.056 iops : min= 32, max= 96, avg=60.40, stdev=13.42, samples=20 00:41:57.056 lat (msec) : 250=24.84%, 500=75.16% 00:41:57.056 cpu : usr=98.28%, sys=1.20%, ctx=36, majf=0, minf=31 00:41:57.056 IO depths : 1=0.2%, 2=0.5%, 4=7.1%, 8=79.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=89.0%, 8=5.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename1: (groupid=0, jobs=1): err= 0: pid=479461: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=62, BW=251KiB/s (257kB/s)(2544KiB/10139msec) 00:41:57.056 slat (usec): min=4, max=115, avg=29.92, stdev=30.80 00:41:57.056 clat (msec): min=40, max=450, avg=254.69, stdev=64.83 00:41:57.056 lat (msec): min=40, max=450, avg=254.72, stdev=64.83 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 41], 5.00th=[ 96], 10.00th=[ 161], 20.00th=[ 232], 00:41:57.056 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 268], 00:41:57.056 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 305], 95.00th=[ 334], 00:41:57.056 | 99.00th=[ 409], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:41:57.056 | 99.99th=[ 451] 00:41:57.056 bw ( KiB/s): min= 128, max= 383, per=4.69%, avg=247.95, stdev=48.41, samples=20 00:41:57.056 iops : min= 32, max= 95, avg=61.95, stdev=11.99, samples=20 00:41:57.056 lat (msec) : 50=2.52%, 100=2.52%, 250=17.45%, 500=77.52% 00:41:57.056 cpu : usr=98.27%, sys=1.21%, ctx=32, majf=0, minf=24 00:41:57.056 IO depths : 1=1.3%, 2=3.3%, 4=12.1%, 8=72.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=90.4%, 8=4.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename1: (groupid=0, jobs=1): err= 0: pid=479462: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=58, BW=233KiB/s (239kB/s)(2360KiB/10130msec) 00:41:57.056 slat (usec): min=5, max=114, avg=34.34, stdev=31.82 00:41:57.056 clat (msec): min=111, max=506, avg=273.75, stdev=55.31 00:41:57.056 lat (msec): min=111, max=506, avg=273.79, stdev=55.31 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 
112], 5.00th=[ 122], 10.00th=[ 245], 20.00th=[ 253], 00:41:57.056 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:41:57.056 | 70.00th=[ 279], 80.00th=[ 292], 90.00th=[ 326], 95.00th=[ 384], 00:41:57.056 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 506], 99.95th=[ 506], 00:41:57.056 | 99.99th=[ 506] 00:41:57.056 bw ( KiB/s): min= 128, max= 368, per=4.35%, avg=229.60, stdev=53.51, samples=20 00:41:57.056 iops : min= 32, max= 92, avg=57.40, stdev=13.38, samples=20 00:41:57.056 lat (msec) : 250=15.59%, 500=84.07%, 750=0.34% 00:41:57.056 cpu : usr=98.41%, sys=1.08%, ctx=26, majf=0, minf=19 00:41:57.056 IO depths : 1=1.0%, 2=4.2%, 4=15.6%, 8=67.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename1: (groupid=0, jobs=1): err= 0: pid=479463: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=58, BW=236KiB/s (241kB/s)(2384KiB/10115msec) 00:41:57.056 slat (usec): min=8, max=109, avg=37.34, stdev=33.30 00:41:57.056 clat (msec): min=105, max=440, avg=270.63, stdev=53.72 00:41:57.056 lat (msec): min=105, max=440, avg=270.67, stdev=53.73 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 112], 5.00th=[ 123], 10.00th=[ 236], 20.00th=[ 251], 00:41:57.056 | 30.00th=[ 255], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271], 00:41:57.056 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 326], 95.00th=[ 376], 00:41:57.056 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:41:57.056 | 99.99th=[ 443] 00:41:57.056 bw ( KiB/s): min= 128, max= 368, per=4.40%, avg=232.00, stdev=54.57, samples=20 00:41:57.056 iops : min= 32, max= 92, avg=58.00, stdev=13.64, samples=20 00:41:57.056 lat (msec) : 250=17.45%, 500=82.55% 00:41:57.056 cpu : usr=98.19%, sys=1.26%, ctx=61, majf=0, minf=26 00:41:57.056 IO depths : 1=0.8%, 2=3.9%, 4=14.9%, 8=68.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.056 filename1: (groupid=0, jobs=1): err= 0: pid=479464: Sat Dec 7 01:09:11 2024 00:41:57.056 read: IOPS=60, BW=240KiB/s (246kB/s)(2424KiB/10089msec) 00:41:57.056 slat (nsec): min=7357, max=86546, avg=16440.12, stdev=15051.23 00:41:57.056 clat (msec): min=159, max=458, avg=266.02, stdev=34.11 00:41:57.056 lat (msec): min=159, max=458, avg=266.03, stdev=34.11 00:41:57.056 clat percentiles (msec): 00:41:57.056 | 1.00th=[ 161], 5.00th=[ 220], 10.00th=[ 245], 20.00th=[ 253], 00:41:57.056 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 275], 00:41:57.056 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 305], 00:41:57.056 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 460], 99.95th=[ 460], 00:41:57.056 | 99.99th=[ 460] 00:41:57.056 bw ( KiB/s): min= 128, max= 272, per=4.46%, avg=236.00, stdev=42.77, samples=20 00:41:57.056 iops : min= 32, max= 68, avg=59.00, stdev=10.69, samples=20 00:41:57.056 lat (msec) : 250=15.84%, 500=84.16% 00:41:57.056 cpu : usr=98.38%, sys=1.07%, ctx=26, majf=0, minf=22 00:41:57.056 IO depths : 1=0.5%, 2=6.8%, 4=25.1%, 8=55.8%, 16=11.9%, 32=0.0%, 
>=64=0.0% 00:41:57.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.056 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename1: (groupid=0, jobs=1): err= 0: pid=479465: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=52, BW=208KiB/s (213kB/s)(2104KiB/10092msec) 00:41:57.057 slat (usec): min=4, max=120, avg=32.03, stdev=30.73 00:41:57.057 clat (msec): min=159, max=462, avg=306.55, stdev=56.38 00:41:57.057 lat (msec): min=159, max=462, avg=306.58, stdev=56.39 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 159], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 268], 00:41:57.057 | 30.00th=[ 279], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 292], 00:41:57.057 | 70.00th=[ 347], 80.00th=[ 380], 90.00th=[ 393], 95.00th=[ 397], 00:41:57.057 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 464], 99.95th=[ 464], 00:41:57.057 | 99.99th=[ 464] 00:41:57.057 bw ( KiB/s): min= 128, max= 256, per=3.85%, avg=204.00, stdev=57.54, samples=20 00:41:57.057 iops : min= 32, max= 64, avg=51.00, stdev=14.39, samples=20 00:41:57.057 lat (msec) : 250=3.04%, 500=96.96% 00:41:57.057 cpu : usr=98.47%, sys=1.07%, ctx=16, majf=0, minf=19 00:41:57.057 IO depths : 1=0.4%, 2=6.7%, 4=25.1%, 8=55.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename1: (groupid=0, jobs=1): err= 0: pid=479466: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=55, BW=223KiB/s (228kB/s)(2248KiB/10091msec) 00:41:57.057 slat (nsec): min=4379, max=60003, avg=12840.29, stdev=7443.05 00:41:57.057 clat (msec): min=147, max=498, avg=286.92, stdev=56.79 00:41:57.057 lat (msec): min=147, max=498, avg=286.94, stdev=56.79 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 148], 5.00th=[ 199], 10.00th=[ 245], 20.00th=[ 262], 00:41:57.057 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:41:57.057 | 70.00th=[ 288], 80.00th=[ 326], 90.00th=[ 384], 95.00th=[ 393], 00:41:57.057 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 498], 99.95th=[ 498], 00:41:57.057 | 99.99th=[ 498] 00:41:57.057 bw ( KiB/s): min= 128, max= 256, per=4.14%, avg=218.40, stdev=49.59, samples=20 00:41:57.057 iops : min= 32, max= 64, avg=54.60, stdev=12.40, samples=20 00:41:57.057 lat (msec) : 250=13.52%, 500=86.48% 00:41:57.057 cpu : usr=98.44%, sys=1.04%, ctx=38, majf=0, minf=24 00:41:57.057 IO depths : 1=1.8%, 2=5.0%, 4=15.7%, 8=66.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=91.3%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename1: (groupid=0, jobs=1): err= 0: pid=479467: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=61, BW=245KiB/s (251kB/s)(2480KiB/10113msec) 00:41:57.057 slat (usec): min=4, max=115, avg=14.67, stdev=13.84 00:41:57.057 clat (msec): min=152, max=413, avg=260.83, stdev=43.58 00:41:57.057 lat (msec): min=152, max=413, avg=260.85, stdev=43.58 00:41:57.057 
clat percentiles (msec): 00:41:57.057 | 1.00th=[ 153], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 241], 00:41:57.057 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 266], 00:41:57.057 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 342], 00:41:57.057 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:41:57.057 | 99.99th=[ 414] 00:41:57.057 bw ( KiB/s): min= 144, max= 384, per=4.57%, avg=241.60, stdev=51.62, samples=20 00:41:57.057 iops : min= 36, max= 96, avg=60.40, stdev=12.91, samples=20 00:41:57.057 lat (msec) : 250=25.81%, 500=74.19% 00:41:57.057 cpu : usr=98.69%, sys=0.89%, ctx=15, majf=0, minf=21 00:41:57.057 IO depths : 1=1.3%, 2=5.8%, 4=19.7%, 8=61.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename1: (groupid=0, jobs=1): err= 0: pid=479468: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10095msec) 00:41:57.057 slat (nsec): min=6140, max=58764, avg=30192.77, stdev=9569.35 00:41:57.057 clat (msec): min=195, max=551, avg=388.00, stdev=63.51 00:41:57.057 lat (msec): min=195, max=551, avg=388.03, stdev=63.51 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 253], 5.00th=[ 266], 10.00th=[ 279], 20.00th=[ 368], 00:41:57.057 | 30.00th=[ 372], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 397], 00:41:57.057 | 70.00th=[ 409], 80.00th=[ 430], 90.00th=[ 456], 95.00th=[ 502], 00:41:57.057 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:41:57.057 | 99.99th=[ 550] 00:41:57.057 bw ( KiB/s): min= 112, max= 256, per=3.04%, avg=160.00, stdev=51.91, samples=20 00:41:57.057 iops : min= 28, max= 64, avg=40.00, stdev=12.98, samples=20 00:41:57.057 lat (msec) : 250=0.96%, 500=94.23%, 750=4.81% 00:41:57.057 cpu : usr=97.99%, sys=1.41%, ctx=8, majf=0, minf=28 00:41:57.057 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename2: (groupid=0, jobs=1): err= 0: pid=479469: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=55, BW=222KiB/s (228kB/s)(2248KiB/10106msec) 00:41:57.057 slat (nsec): min=4053, max=56304, avg=15628.77, stdev=9987.80 00:41:57.057 clat (msec): min=165, max=501, avg=287.27, stdev=55.44 00:41:57.057 lat (msec): min=165, max=501, avg=287.29, stdev=55.44 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 167], 5.00th=[ 232], 10.00th=[ 247], 20.00th=[ 253], 00:41:57.057 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 279], 00:41:57.057 | 70.00th=[ 296], 80.00th=[ 321], 90.00th=[ 384], 95.00th=[ 397], 00:41:57.057 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 502], 99.95th=[ 502], 00:41:57.057 | 99.99th=[ 502] 00:41:57.057 bw ( KiB/s): min= 128, max= 256, per=4.14%, avg=218.40, stdev=38.24, samples=20 00:41:57.057 iops : min= 32, max= 64, avg=54.60, stdev= 9.56, samples=20 00:41:57.057 lat (msec) : 250=16.01%, 500=83.63%, 750=0.36% 00:41:57.057 cpu : usr=98.58%, sys=0.99%, ctx=16, majf=0, minf=28 00:41:57.057 IO 
depths : 1=0.9%, 2=3.2%, 4=12.5%, 8=71.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename2: (groupid=0, jobs=1): err= 0: pid=479470: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10088msec) 00:41:57.057 slat (usec): min=8, max=110, avg=31.13, stdev=30.57 00:41:57.057 clat (msec): min=148, max=656, avg=387.69, stdev=63.63 00:41:57.057 lat (msec): min=148, max=656, avg=387.72, stdev=63.63 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 257], 5.00th=[ 300], 10.00th=[ 300], 20.00th=[ 368], 00:41:57.057 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 393], 00:41:57.057 | 70.00th=[ 405], 80.00th=[ 426], 90.00th=[ 439], 95.00th=[ 451], 00:41:57.057 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 659], 99.95th=[ 659], 00:41:57.057 | 99.99th=[ 659] 00:41:57.057 bw ( KiB/s): min= 112, max= 256, per=3.19%, avg=168.42, stdev=61.36, samples=19 00:41:57.057 iops : min= 28, max= 64, avg=42.11, stdev=15.34, samples=19 00:41:57.057 lat (msec) : 250=0.96%, 500=94.23%, 750=4.81% 00:41:57.057 cpu : usr=98.58%, sys=0.94%, ctx=18, majf=0, minf=20 00:41:57.057 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename2: (groupid=0, jobs=1): err= 0: pid=479471: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=67, BW=271KiB/s (277kB/s)(2740KiB/10116msec) 00:41:57.057 slat (nsec): min=7062, max=38708, avg=10840.15, stdev=3846.58 00:41:57.057 clat (msec): min=9, max=409, avg=235.50, stdev=65.88 00:41:57.057 lat (msec): min=9, max=409, avg=235.51, stdev=65.88 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 10], 5.00th=[ 63], 10.00th=[ 140], 20.00th=[ 197], 00:41:57.057 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:41:57.057 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 288], 00:41:57.057 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:41:57.057 | 99.99th=[ 409] 00:41:57.057 bw ( KiB/s): min= 176, max= 616, per=5.07%, avg=267.60, stdev=94.71, samples=20 00:41:57.057 iops : min= 44, max= 154, avg=66.90, stdev=23.68, samples=20 00:41:57.057 lat (msec) : 10=1.02%, 20=1.31%, 100=3.80%, 250=25.26%, 500=68.61% 00:41:57.057 cpu : usr=98.09%, sys=1.32%, ctx=37, majf=0, minf=35 00:41:57.057 IO depths : 1=0.1%, 2=0.3%, 4=6.4%, 8=80.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:41:57.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 complete : 0=0.0%, 4=88.8%, 8=5.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.057 issued rwts: total=685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.057 filename2: (groupid=0, jobs=1): err= 0: pid=479472: Sat Dec 7 01:09:11 2024 00:41:57.057 read: IOPS=59, BW=237KiB/s (242kB/s)(2392KiB/10114msec) 00:41:57.057 slat (usec): min=8, max=109, avg=25.99, stdev=27.58 00:41:57.057 clat (msec): min=112, max=496, 
avg=269.83, stdev=49.70 00:41:57.057 lat (msec): min=112, max=496, avg=269.85, stdev=49.70 00:41:57.057 clat percentiles (msec): 00:41:57.057 | 1.00th=[ 113], 5.00th=[ 122], 10.00th=[ 249], 20.00th=[ 255], 00:41:57.057 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 275], 00:41:57.057 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 321], 95.00th=[ 368], 00:41:57.057 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 498], 99.95th=[ 498], 00:41:57.057 | 99.99th=[ 498] 00:41:57.057 bw ( KiB/s): min= 128, max= 384, per=4.40%, avg=232.80, stdev=56.26, samples=20 00:41:57.058 iops : min= 32, max= 96, avg=58.20, stdev=14.07, samples=20 00:41:57.058 lat (msec) : 250=12.04%, 500=87.96% 00:41:57.058 cpu : usr=98.37%, sys=1.11%, ctx=30, majf=0, minf=22 00:41:57.058 IO depths : 1=1.0%, 2=3.5%, 4=13.7%, 8=70.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:57.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 complete : 0=0.0%, 4=90.9%, 8=3.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.058 filename2: (groupid=0, jobs=1): err= 0: pid=479473: Sat Dec 7 01:09:11 2024 00:41:57.058 read: IOPS=60, BW=242KiB/s (248kB/s)(2448KiB/10116msec) 00:41:57.058 slat (nsec): min=8241, max=64311, avg=14928.57, stdev=9579.88 00:41:57.058 clat (msec): min=113, max=434, avg=263.60, stdev=49.05 00:41:57.058 lat (msec): min=113, max=434, avg=263.62, stdev=49.04 00:41:57.058 clat percentiles (msec): 00:41:57.058 | 1.00th=[ 114], 5.00th=[ 123], 10.00th=[ 218], 20.00th=[ 251], 00:41:57.058 | 30.00th=[ 255], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 271], 00:41:57.058 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 326], 00:41:57.058 | 99.00th=[ 418], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:41:57.058 | 99.99th=[ 435] 00:41:57.058 bw ( KiB/s): min= 128, max= 384, per=4.52%, avg=238.40, stdev=52.91, samples=20 00:41:57.058 iops : min= 32, max= 96, avg=59.60, stdev=13.23, samples=20 00:41:57.058 lat (msec) : 250=19.28%, 500=80.72% 00:41:57.058 cpu : usr=98.59%, sys=0.97%, ctx=20, majf=0, minf=28 00:41:57.058 IO depths : 1=1.1%, 2=3.4%, 4=12.7%, 8=71.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:41:57.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 complete : 0=0.0%, 4=90.5%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 issued rwts: total=612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.058 filename2: (groupid=0, jobs=1): err= 0: pid=479474: Sat Dec 7 01:09:11 2024 00:41:57.058 read: IOPS=59, BW=240KiB/s (245kB/s)(2424KiB/10111msec) 00:41:57.058 slat (nsec): min=4321, max=60127, avg=15572.97, stdev=10153.45 00:41:57.058 clat (msec): min=159, max=415, avg=266.65, stdev=28.34 00:41:57.058 lat (msec): min=159, max=416, avg=266.66, stdev=28.34 00:41:57.058 clat percentiles (msec): 00:41:57.058 | 1.00th=[ 161], 5.00th=[ 230], 10.00th=[ 247], 20.00th=[ 253], 00:41:57.058 | 30.00th=[ 255], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 275], 00:41:57.058 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 321], 00:41:57.058 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:41:57.058 | 99.99th=[ 418] 00:41:57.058 bw ( KiB/s): min= 144, max= 256, per=4.46%, avg=236.00, stdev=40.17, samples=20 00:41:57.058 iops : min= 36, max= 64, avg=59.00, stdev=10.04, samples=20 00:41:57.058 lat (msec) : 250=13.53%, 
500=86.47% 00:41:57.058 cpu : usr=98.50%, sys=1.08%, ctx=18, majf=0, minf=28 00:41:57.058 IO depths : 1=0.7%, 2=6.9%, 4=25.1%, 8=55.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:41:57.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.058 filename2: (groupid=0, jobs=1): err= 0: pid=479475: Sat Dec 7 01:09:11 2024 00:41:57.058 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10089msec) 00:41:57.058 slat (nsec): min=8199, max=92246, avg=15782.83, stdev=9548.51 00:41:57.058 clat (msec): min=159, max=535, avg=305.56, stdev=60.27 00:41:57.058 lat (msec): min=159, max=535, avg=305.57, stdev=60.27 00:41:57.058 clat percentiles (msec): 00:41:57.058 | 1.00th=[ 159], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 262], 00:41:57.058 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 292], 00:41:57.058 | 70.00th=[ 326], 80.00th=[ 380], 90.00th=[ 393], 95.00th=[ 397], 00:41:57.058 | 99.00th=[ 409], 99.50th=[ 510], 99.90th=[ 535], 99.95th=[ 535], 00:41:57.058 | 99.99th=[ 535] 00:41:57.058 bw ( KiB/s): min= 128, max= 256, per=3.87%, avg=204.80, stdev=62.85, samples=20 00:41:57.058 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:41:57.058 lat (msec) : 250=4.17%, 500=95.08%, 750=0.76% 00:41:57.058 cpu : usr=98.22%, sys=1.19%, ctx=38, majf=0, minf=19 00:41:57.058 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:41:57.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.058 filename2: (groupid=0, jobs=1): err= 0: pid=479476: Sat Dec 7 01:09:11 2024 00:41:57.058 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10086msec) 00:41:57.058 slat (usec): min=8, max=120, avg=47.76, stdev=28.07 00:41:57.058 clat (msec): min=148, max=656, avg=387.52, stdev=71.87 00:41:57.058 lat (msec): min=148, max=656, avg=387.57, stdev=71.87 00:41:57.058 clat percentiles (msec): 00:41:57.058 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 279], 20.00th=[ 347], 00:41:57.058 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 397], 00:41:57.058 | 70.00th=[ 405], 80.00th=[ 430], 90.00th=[ 456], 95.00th=[ 535], 00:41:57.058 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 659], 99.95th=[ 659], 00:41:57.058 | 99.99th=[ 659] 00:41:57.058 bw ( KiB/s): min= 128, max= 256, per=3.19%, avg=168.42, stdev=59.48, samples=19 00:41:57.058 iops : min= 32, max= 64, avg=42.11, stdev=14.87, samples=19 00:41:57.058 lat (msec) : 250=0.96%, 500=90.87%, 750=8.17% 00:41:57.058 cpu : usr=98.29%, sys=1.15%, ctx=32, majf=0, minf=18 00:41:57.058 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:57.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:57.058 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:57.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:57.058 00:41:57.058 Run status group 0 (all jobs): 00:41:57.058 READ: bw=5269KiB/s (5395kB/s), 165KiB/s-271KiB/s (169kB/s-277kB/s), io=52.2MiB (54.7MB), run=10086-10139msec 00:41:57.058 
01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
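For reference, the per-subsystem setup and teardown that the rpc_cmd traces above and below drive can be condensed into the following sketch. It is a minimal sketch, assuming direct use of SPDK's standard scripts/rpc.py client instead of the test framework's rpc_cmd wrapper; the arguments mirror the ones visible in the trace (64 MiB null bdevs, 512-byte blocks, 16-byte metadata, DIF type per run, listener on 10.0.0.2:4420).

```bash
#!/usr/bin/env bash
# Sketch of the create/destroy cycle performed by dif.sh, assuming the
# standard scripts/rpc.py client (the test itself goes through the
# rpc_cmd wrapper shown in the trace).
RPC=./scripts/rpc.py
DIF_TYPE=1            # the run below uses --dif-type 1; the previous one used 2

create_subsystem() {
    local id=$1
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata for DIF
    $RPC bdev_null_create "bdev_null${id}" 64 512 --md-size 16 --dif-type "$DIF_TYPE"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${id}" \
         --serial-number "53313233-${id}" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${id}" "bdev_null${id}"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${id}" \
         -t tcp -a 10.0.0.2 -s 4420
}

destroy_subsystem() {
    local id=$1
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${id}"
    $RPC bdev_null_delete "bdev_null${id}"
}

# e.g. the DIF-type-1 run below sets up subsystems 0 and 1:
for id in 0 1; do create_subsystem "$id"; done
# ... run fio against the exported namespaces ...
for id in 0 1; do destroy_subsystem "$id"; done
```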
00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.058 bdev_null0 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.058 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 [2024-12-07 01:09:11.864679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 bdev_null1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:57.059 { 00:41:57.059 "params": { 00:41:57.059 "name": "Nvme$subsystem", 00:41:57.059 "trtype": "$TEST_TRANSPORT", 00:41:57.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:57.059 "adrfam": "ipv4", 00:41:57.059 "trsvcid": "$NVMF_PORT", 00:41:57.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:57.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:57.059 "hdgst": ${hdgst:-false}, 00:41:57.059 "ddgst": ${ddgst:-false} 00:41:57.059 }, 00:41:57.059 "method": "bdev_nvme_attach_controller" 00:41:57.059 } 00:41:57.059 EOF 00:41:57.059 )") 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:57.059 01:09:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:57.059 { 00:41:57.059 "params": { 00:41:57.059 "name": "Nvme$subsystem", 00:41:57.059 "trtype": "$TEST_TRANSPORT", 00:41:57.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:57.059 "adrfam": "ipv4", 00:41:57.059 "trsvcid": "$NVMF_PORT", 00:41:57.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:57.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:57.059 "hdgst": ${hdgst:-false}, 00:41:57.059 "ddgst": ${ddgst:-false} 00:41:57.059 }, 00:41:57.059 "method": "bdev_nvme_attach_controller" 00:41:57.059 } 00:41:57.059 EOF 00:41:57.059 )") 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
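(For reference: the create_subsystems trace above reduces to four RPCs per null-bdev subsystem. The sketch below restates them with every parameter copied from the trace; invoking scripts/rpc.py directly and relying on the default RPC socket are assumptions here, since the harness reaches the same calls through its rpc_cmd wrapper against the nvmf_tgt it started earlier.)

    # Sketch: the per-subsystem setup traced above, issued against a running nvmf_tgt.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    for sub in 0 1; do
        # null bdev: size 64, 512-byte blocks, 16-byte metadata, DIF type 1
        "$RPC" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done

(The teardown near the end of the run mirrors this with nvmf_delete_subsystem and bdev_null_delete per subsystem.)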
00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:57.059 "params": { 00:41:57.059 "name": "Nvme0", 00:41:57.059 "trtype": "tcp", 00:41:57.059 "traddr": "10.0.0.2", 00:41:57.059 "adrfam": "ipv4", 00:41:57.059 "trsvcid": "4420", 00:41:57.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:57.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:57.059 "hdgst": false, 00:41:57.059 "ddgst": false 00:41:57.059 }, 00:41:57.059 "method": "bdev_nvme_attach_controller" 00:41:57.059 },{ 00:41:57.059 "params": { 00:41:57.059 "name": "Nvme1", 00:41:57.059 "trtype": "tcp", 00:41:57.059 "traddr": "10.0.0.2", 00:41:57.059 "adrfam": "ipv4", 00:41:57.059 "trsvcid": "4420", 00:41:57.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:57.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:57.059 "hdgst": false, 00:41:57.059 "ddgst": false 00:41:57.059 }, 00:41:57.059 "method": "bdev_nvme_attach_controller" 00:41:57.059 }' 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:57.059 01:09:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:57.059 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:57.059 ... 00:41:57.059 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:57.059 ... 
00:41:57.059 fio-3.35 00:41:57.059 Starting 4 threads 00:42:02.326 00:42:02.326 filename0: (groupid=0, jobs=1): err= 0: pid=481384: Sat Dec 7 01:09:17 2024 00:42:02.326 read: IOPS=1816, BW=14.2MiB/s (14.9MB/s)(71.0MiB/5001msec) 00:42:02.326 slat (nsec): min=7144, max=71779, avg=17010.66, stdev=10585.08 00:42:02.326 clat (usec): min=912, max=7816, avg=4344.23, stdev=501.41 00:42:02.326 lat (usec): min=932, max=7830, avg=4361.24, stdev=501.22 00:42:02.326 clat percentiles (usec): 00:42:02.326 | 1.00th=[ 2900], 5.00th=[ 3720], 10.00th=[ 3949], 20.00th=[ 4146], 00:42:02.326 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:42:02.326 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5080], 00:42:02.326 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7767], 00:42:02.326 | 99.99th=[ 7832] 00:42:02.326 bw ( KiB/s): min=14204, max=14896, per=25.07%, avg=14575.56, stdev=192.30, samples=9 00:42:02.326 iops : min= 1775, max= 1862, avg=1821.89, stdev=24.16, samples=9 00:42:02.326 lat (usec) : 1000=0.02% 00:42:02.326 lat (msec) : 2=0.37%, 4=11.96%, 10=87.65% 00:42:02.326 cpu : usr=95.90%, sys=3.62%, ctx=10, majf=0, minf=9 00:42:02.326 IO depths : 1=1.1%, 2=16.7%, 4=56.1%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 issued rwts: total=9084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:02.326 filename0: (groupid=0, jobs=1): err= 0: pid=481385: Sat Dec 7 01:09:17 2024 00:42:02.326 read: IOPS=1801, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5002msec) 00:42:02.326 slat (nsec): min=6611, max=63779, avg=21636.04, stdev=9432.18 00:42:02.326 clat (usec): min=776, max=7859, avg=4355.93, stdev=630.25 00:42:02.326 lat (usec): min=789, max=7869, avg=4377.56, stdev=629.96 00:42:02.326 clat percentiles (usec): 00:42:02.326 | 1.00th=[ 2114], 5.00th=[ 3687], 10.00th=[ 3949], 20.00th=[ 4146], 00:42:02.326 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:42:02.326 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5342], 00:42:02.326 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7701], 99.95th=[ 7767], 00:42:02.326 | 99.99th=[ 7832] 00:42:02.326 bw ( KiB/s): min=14288, max=14704, per=24.81%, avg=14419.56, stdev=126.69, samples=9 00:42:02.326 iops : min= 1786, max= 1838, avg=1802.44, stdev=15.84, samples=9 00:42:02.326 lat (usec) : 1000=0.08% 00:42:02.326 lat (msec) : 2=0.84%, 4=10.25%, 10=88.83% 00:42:02.326 cpu : usr=96.68%, sys=2.82%, ctx=7, majf=0, minf=9 00:42:02.326 IO depths : 1=0.5%, 2=20.5%, 4=53.3%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 issued rwts: total=9013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:02.326 filename1: (groupid=0, jobs=1): err= 0: pid=481386: Sat Dec 7 01:09:17 2024 00:42:02.326 read: IOPS=1842, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5003msec) 00:42:02.326 slat (nsec): min=6754, max=95502, avg=15510.70, stdev=9922.79 00:42:02.326 clat (usec): min=941, max=8294, avg=4287.46, stdev=498.49 00:42:02.326 lat (usec): min=959, max=8314, avg=4302.97, stdev=498.91 00:42:02.326 clat percentiles (usec): 00:42:02.326 | 1.00th=[ 2737], 5.00th=[ 3589], 
10.00th=[ 3851], 20.00th=[ 4047], 00:42:02.326 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:42:02.326 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4817], 00:42:02.326 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7701], 00:42:02.326 | 99.99th=[ 8291] 00:42:02.326 bw ( KiB/s): min=14384, max=15104, per=25.35%, avg=14738.80, stdev=230.09, samples=10 00:42:02.326 iops : min= 1798, max= 1888, avg=1842.30, stdev=28.82, samples=10 00:42:02.326 lat (usec) : 1000=0.02% 00:42:02.326 lat (msec) : 2=0.43%, 4=16.76%, 10=82.78% 00:42:02.326 cpu : usr=95.46%, sys=4.04%, ctx=11, majf=0, minf=9 00:42:02.326 IO depths : 1=0.9%, 2=14.9%, 4=57.8%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 issued rwts: total=9218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:02.326 filename1: (groupid=0, jobs=1): err= 0: pid=481387: Sat Dec 7 01:09:17 2024 00:42:02.326 read: IOPS=1807, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5004msec) 00:42:02.326 slat (nsec): min=6703, max=73311, avg=20346.34, stdev=11106.38 00:42:02.326 clat (usec): min=800, max=8067, avg=4350.31, stdev=617.75 00:42:02.326 lat (usec): min=818, max=8074, avg=4370.66, stdev=617.65 00:42:02.326 clat percentiles (usec): 00:42:02.326 | 1.00th=[ 2212], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4146], 00:42:02.326 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:42:02.326 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5276], 00:42:02.326 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7701], 99.95th=[ 7898], 00:42:02.326 | 99.99th=[ 8094] 00:42:02.326 bw ( KiB/s): min=14304, max=14640, per=24.87%, avg=14459.20, stdev=103.98, samples=10 00:42:02.326 iops : min= 1788, max= 1830, avg=1807.40, stdev=13.00, samples=10 00:42:02.326 lat (usec) : 1000=0.07% 00:42:02.326 lat (msec) : 2=0.83%, 4=11.53%, 10=87.57% 00:42:02.326 cpu : usr=95.62%, sys=3.90%, ctx=6, majf=0, minf=9 00:42:02.326 IO depths : 1=0.5%, 2=19.7%, 4=54.3%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.326 issued rwts: total=9044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:02.326 00:42:02.326 Run status group 0 (all jobs): 00:42:02.326 READ: bw=56.8MiB/s (59.5MB/s), 14.1MiB/s-14.4MiB/s (14.8MB/s-15.1MB/s), io=284MiB (298MB), run=5001-5004msec 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 
01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.326 00:42:02.326 real 0m24.237s 00:42:02.326 user 4m36.673s 00:42:02.326 sys 0m5.230s 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 ************************************ 00:42:02.326 END TEST fio_dif_rand_params 00:42:02.326 ************************************ 00:42:02.326 01:09:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:02.326 01:09:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:02.326 01:09:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.326 01:09:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.326 ************************************ 00:42:02.326 START TEST fio_dif_digest 00:42:02.326 ************************************ 00:42:02.326 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:02.326 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:02.327 01:09:18 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:02.327 bdev_null0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:02.327 [2024-12-07 01:09:18.248181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:02.327 01:09:18 
nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.327 { 00:42:02.327 "params": { 00:42:02.327 "name": "Nvme$subsystem", 00:42:02.327 "trtype": "$TEST_TRANSPORT", 00:42:02.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.327 "adrfam": "ipv4", 00:42:02.327 "trsvcid": "$NVMF_PORT", 00:42:02.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.327 "hdgst": ${hdgst:-false}, 00:42:02.327 "ddgst": ${ddgst:-false} 00:42:02.327 }, 00:42:02.327 "method": "bdev_nvme_attach_controller" 00:42:02.327 } 00:42:02.327 EOF 00:42:02.327 )") 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
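(As in the fio_dif_rand_params run above, the resolved attach-controller JSON is handed to fio's spdk_bdev ioengine over /dev/fd/62, while the job file generated by gen_fio_conf travels over /dev/fd/61 and is never echoed into this log. A rough standalone equivalent is sketched below; the bdev.json and digest.job file names, the Nvme0n1 bdev name, and any job option not fixed at target/dif.sh@127-128 (bs=128k, numjobs=3, iodepth=3, runtime=10, hdgst/ddgst enabled) are illustrative assumptions, not values taken from this trace.)

    # Sketch only: run the digest workload by hand from a saved SPDK JSON config.
    # bdev.json is assumed to hold the config printed by gen_nvmf_target_json below.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # digest.job: assumed job file matching the knobs set at target/dif.sh@127
    printf '%s\n' '[global]' 'thread=1' 'rw=randread' 'bs=128k' 'iodepth=3' \
        'numjobs=3' 'runtime=10' 'time_based=1' '' '[filename0]' \
        'filename=Nvme0n1' > digest.job

    # same invocation shape as the trace, with files in place of the /dev/fd pipes
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./bdev.json digest.job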
00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:02.327 "params": { 00:42:02.327 "name": "Nvme0", 00:42:02.327 "trtype": "tcp", 00:42:02.327 "traddr": "10.0.0.2", 00:42:02.327 "adrfam": "ipv4", 00:42:02.327 "trsvcid": "4420", 00:42:02.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:02.327 "hdgst": true, 00:42:02.327 "ddgst": true 00:42:02.327 }, 00:42:02.327 "method": "bdev_nvme_attach_controller" 00:42:02.327 }' 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:02.327 01:09:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.587 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:02.587 ... 
00:42:02.587 fio-3.35 00:42:02.587 Starting 3 threads 00:42:14.782 00:42:14.782 filename0: (groupid=0, jobs=1): err= 0: pid=482143: Sat Dec 7 01:09:29 2024 00:42:14.782 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10044msec) 00:42:14.782 slat (nsec): min=7627, max=98533, avg=16440.43, stdev=4836.21 00:42:14.782 clat (usec): min=8730, max=52637, avg=15169.94, stdev=1515.81 00:42:14.782 lat (usec): min=8744, max=52651, avg=15186.38, stdev=1515.87 00:42:14.782 clat percentiles (usec): 00:42:14.782 | 1.00th=[12518], 5.00th=[13435], 10.00th=[13960], 20.00th=[14353], 00:42:14.782 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:42:14.782 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:42:14.782 | 99.00th=[17695], 99.50th=[18220], 99.90th=[44303], 99.95th=[52691], 00:42:14.782 | 99.99th=[52691] 00:42:14.782 bw ( KiB/s): min=24320, max=26112, per=32.99%, avg=25331.20, stdev=443.21, samples=20 00:42:14.782 iops : min= 190, max= 204, avg=197.90, stdev= 3.46, samples=20 00:42:14.782 lat (msec) : 10=0.40%, 20=99.34%, 50=0.20%, 100=0.05% 00:42:14.782 cpu : usr=94.74%, sys=4.77%, ctx=18, majf=0, minf=180 00:42:14.782 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.782 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.782 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:14.782 filename0: (groupid=0, jobs=1): err= 0: pid=482144: Sat Dec 7 01:09:29 2024 00:42:14.782 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(245MiB/10044msec) 00:42:14.782 slat (nsec): min=6799, max=48308, avg=16271.72, stdev=4479.35 00:42:14.782 clat (usec): min=9466, max=50659, avg=15363.34, stdev=1488.66 00:42:14.782 lat (usec): min=9487, max=50672, avg=15379.61, stdev=1488.60 00:42:14.782 clat percentiles (usec): 00:42:14.782 | 1.00th=[12911], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:42:14.782 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:42:14.782 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16581], 95.00th=[16909], 00:42:14.783 | 99.00th=[17695], 99.50th=[18220], 99.90th=[46924], 99.95th=[50594], 00:42:14.783 | 99.99th=[50594] 00:42:14.783 bw ( KiB/s): min=24320, max=25344, per=32.57%, avg=25011.20, stdev=300.62, samples=20 00:42:14.783 iops : min= 190, max= 198, avg=195.40, stdev= 2.35, samples=20 00:42:14.783 lat (msec) : 10=0.15%, 20=99.74%, 50=0.05%, 100=0.05% 00:42:14.783 cpu : usr=94.72%, sys=4.79%, ctx=20, majf=0, minf=134 00:42:14.783 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.783 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:14.783 filename0: (groupid=0, jobs=1): err= 0: pid=482145: Sat Dec 7 01:09:29 2024 00:42:14.783 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10044msec) 00:42:14.783 slat (nsec): min=7460, max=65424, avg=21569.03, stdev=5981.69 00:42:14.783 clat (usec): min=6816, max=54789, avg=14386.25, stdev=2192.27 00:42:14.783 lat (usec): min=6824, max=54805, avg=14407.82, stdev=2192.21 00:42:14.783 clat percentiles (usec): 00:42:14.783 | 1.00th=[11994], 5.00th=[12780], 10.00th=[13042], 
20.00th=[13435], 00:42:14.783 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:42:14.783 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:42:14.783 | 99.00th=[16909], 99.50th=[17433], 99.90th=[54789], 99.95th=[54789], 00:42:14.783 | 99.99th=[54789] 00:42:14.783 bw ( KiB/s): min=24526, max=27648, per=34.77%, avg=26698.30, stdev=683.59, samples=20 00:42:14.783 iops : min= 191, max= 216, avg=208.55, stdev= 5.44, samples=20 00:42:14.783 lat (msec) : 10=0.14%, 20=99.62%, 100=0.24% 00:42:14.783 cpu : usr=90.57%, sys=6.98%, ctx=471, majf=0, minf=149 00:42:14.783 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.783 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:14.783 00:42:14.783 Run status group 0 (all jobs): 00:42:14.783 READ: bw=75.0MiB/s (78.6MB/s), 24.3MiB/s-26.0MiB/s (25.5MB/s-27.2MB/s), io=753MiB (790MB), run=10044-10044msec 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.783 00:42:14.783 real 0m11.264s 00:42:14.783 user 0m29.274s 00:42:14.783 sys 0m1.943s 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.783 01:09:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:14.783 ************************************ 00:42:14.783 END TEST fio_dif_digest 00:42:14.783 ************************************ 00:42:14.783 01:09:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:14.783 01:09:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:14.783 rmmod nvme_tcp 00:42:14.783 rmmod nvme_fabrics 00:42:14.783 rmmod nvme_keyring 00:42:14.783 01:09:29 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 475597 ']' 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 475597 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 475597 ']' 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 475597 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475597 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475597' 00:42:14.783 killing process with pid 475597 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@973 -- # kill 475597 00:42:14.783 01:09:29 nvmf_dif -- common/autotest_common.sh@978 -- # wait 475597 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:14.783 01:09:29 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:15.040 Waiting for block devices as requested 00:42:15.040 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:15.040 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:15.297 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:15.297 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:15.297 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:15.556 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:15.556 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:15.556 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:15.816 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:15.816 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:15.816 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:15.816 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:16.077 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:16.077 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:16.077 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:16.077 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:16.336 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:16.336 01:09:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:16.336 01:09:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:16.336 01:09:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.879 01:09:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:18.879 
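(The nvmftestfini teardown traced above condenses to the steps below. The ip netns del line is an assumption: _remove_spdk_ns is not expanded in this trace, but cvl_0_0_ns_spdk is the namespace name checked just before it.)

    # Condensed sketch of the nvmftestfini path traced above (this run's target pid: 475597).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sync
    modprobe -v -r nvme-tcp        # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics    # fabrics/keyring already went with nvme-tcp above
    kill 475597 && wait 475597     # works in the harness because nvmf_tgt is a child of the test shell
    "$SPDK_DIR/scripts/setup.sh" reset                      # rebind NVMe/ioatdma devices to kernel drivers
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged rules
    ip netns del cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1       # clear the initiator-side test address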
00:42:18.879 real 1m6.953s 00:42:18.879 user 6m33.522s 00:42:18.879 sys 0m16.091s 00:42:18.879 01:09:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:18.879 01:09:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:18.879 ************************************ 00:42:18.879 END TEST nvmf_dif 00:42:18.879 ************************************ 00:42:18.879 01:09:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:18.879 01:09:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:18.879 01:09:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:18.879 01:09:34 -- common/autotest_common.sh@10 -- # set +x 00:42:18.879 ************************************ 00:42:18.879 START TEST nvmf_abort_qd_sizes 00:42:18.879 ************************************ 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:18.879 * Looking for test storage... 00:42:18.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:18.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.879 --rc genhtml_branch_coverage=1 00:42:18.879 --rc genhtml_function_coverage=1 00:42:18.879 --rc genhtml_legend=1 00:42:18.879 --rc geninfo_all_blocks=1 00:42:18.879 --rc geninfo_unexecuted_blocks=1 00:42:18.879 00:42:18.879 ' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:18.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.879 --rc genhtml_branch_coverage=1 00:42:18.879 --rc genhtml_function_coverage=1 00:42:18.879 --rc genhtml_legend=1 00:42:18.879 --rc geninfo_all_blocks=1 00:42:18.879 --rc geninfo_unexecuted_blocks=1 00:42:18.879 00:42:18.879 ' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:18.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.879 --rc genhtml_branch_coverage=1 00:42:18.879 --rc genhtml_function_coverage=1 00:42:18.879 --rc genhtml_legend=1 00:42:18.879 --rc geninfo_all_blocks=1 00:42:18.879 --rc geninfo_unexecuted_blocks=1 00:42:18.879 00:42:18.879 ' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:18.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.879 --rc genhtml_branch_coverage=1 00:42:18.879 --rc genhtml_function_coverage=1 00:42:18.879 --rc genhtml_legend=1 00:42:18.879 --rc geninfo_all_blocks=1 00:42:18.879 --rc geninfo_unexecuted_blocks=1 00:42:18.879 00:42:18.879 ' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:18.879 01:09:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:18.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:18.880 01:09:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:20.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:20.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:20.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:20.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:20.781 01:09:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:20.781 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:20.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:20.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:42:20.782 00:42:20.782 --- 10.0.0.2 ping statistics --- 00:42:20.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.782 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:20.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:20.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:42:20.782 00:42:20.782 --- 10.0.0.1 ping statistics --- 00:42:20.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.782 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:20.782 01:09:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:22.160 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:22.160 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:22.160 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:23.102 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=487054 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 487054 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 487054 ']' 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:23.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.102 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.362 [2024-12-07 01:09:39.264648] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:42:23.362 [2024-12-07 01:09:39.264725] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.363 [2024-12-07 01:09:39.337280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:23.363 [2024-12-07 01:09:39.389037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.363 [2024-12-07 01:09:39.389108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.363 [2024-12-07 01:09:39.389122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.363 [2024-12-07 01:09:39.389138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.363 [2024-12-07 01:09:39.389148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:23.363 [2024-12-07 01:09:39.390649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.363 [2024-12-07 01:09:39.390712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:23.363 [2024-12-07 01:09:39.390780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:23.363 [2024-12-07 01:09:39.390783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.363 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:23.621 
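The trace above builds the isolated TCP path the target runs behind: one interface of the cvl_0_0/cvl_0_1 pair is moved into a fresh network namespace, both ends are addressed on 10.0.0.0/24, port 4420 is opened towards the namespace, and reachability is checked in both directions before nvmf_tgt is started under ip netns exec. A condensed sketch of that sequence, with the long iptables comment tag dropped and ./build/bin standing in for the full workspace path from the log:

  # move the target-side interface into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator address on the host, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links up and open the NVMe/TCP port towards the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify both directions, then launch the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &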
01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.621 01:09:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.621 ************************************ 00:42:23.621 START TEST spdk_target_abort 00:42:23.621 ************************************ 00:42:23.621 01:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:23.621 01:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:23.621 01:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:23.621 01:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.621 01:09:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.907 spdk_targetn1 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.907 [2024-12-07 01:09:42.410627] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.907 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.908 [2024-12-07 01:09:42.450919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:26.908 01:09:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:29.488 Initializing NVMe Controllers 00:42:29.488 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:29.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:29.488 Initialization complete. Launching workers. 00:42:29.488 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12722, failed: 0 00:42:29.488 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1181, failed to submit 11541 00:42:29.488 success 724, unsuccessful 457, failed 0 00:42:29.488 01:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:29.488 01:09:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:32.771 Initializing NVMe Controllers 00:42:32.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:32.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:32.771 Initialization complete. Launching workers. 00:42:32.771 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8576, failed: 0 00:42:32.771 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7313 00:42:32.771 success 300, unsuccessful 963, failed 0 00:42:32.771 01:09:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:32.771 01:09:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:36.060 Initializing NVMe Controllers 00:42:36.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:36.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:36.060 Initialization complete. Launching workers. 
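The spdk_target_abort phase above can be read as a short RPC recipe: claim the local NVMe device as an SPDK bdev, export it over NVMe/TCP on the namespace address, and sweep the abort example over queue depths 4, 24 and 64. A minimal sketch, assuming rpc.py reaches the target's default /var/tmp/spdk.sock as it does in this run, with SPDK standing in for the workspace path shown in the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # attach the PCIe NVMe device as bdev spdk_targetn1 and export it over TCP
  $SPDK/scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # the abort tool drives a 50/50 read-write load and aborts outstanding I/O at each queue depth
  for qd in 4 24 64; do
    $SPDK/build/examples/abort -q $qd -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

The per-depth summaries in the log (I/O completed, aborts submitted, success/unsuccessful) are the output of that loop.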
00:42:36.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31004, failed: 0 00:42:36.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2702, failed to submit 28302 00:42:36.060 success 494, unsuccessful 2208, failed 0 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.060 01:09:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 487054 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 487054 ']' 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 487054 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487054 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487054' 00:42:37.435 killing process with pid 487054 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 487054 00:42:37.435 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 487054 00:42:37.693 00:42:37.693 real 0m14.153s 00:42:37.693 user 0m53.561s 00:42:37.693 sys 0m2.575s 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:37.693 ************************************ 00:42:37.693 END TEST spdk_target_abort 00:42:37.693 ************************************ 00:42:37.693 01:09:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:37.693 01:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:37.693 01:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:37.693 01:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:37.693 ************************************ 00:42:37.693 START TEST kernel_target_abort 00:42:37.693 
************************************ 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:37.693 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:37.694 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:37.694 01:09:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:39.069 Waiting for block devices as requested 00:42:39.069 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:39.069 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:39.326 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:39.326 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:39.326 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:39.583 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:39.583 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:39.583 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:39.583 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:39.841 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:39.841 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:39.841 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:39.841 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:40.099 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:40.099 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:40.099 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:40.099 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:40.357 No valid GPT data, bailing 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:40.357 01:09:56 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:40.357 00:42:40.357 Discovery Log Number of Records 2, Generation counter 2 00:42:40.357 =====Discovery Log Entry 0====== 00:42:40.357 trtype: tcp 00:42:40.357 adrfam: ipv4 00:42:40.357 subtype: current discovery subsystem 00:42:40.357 treq: not specified, sq flow control disable supported 00:42:40.357 portid: 1 00:42:40.357 trsvcid: 4420 00:42:40.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:40.357 traddr: 10.0.0.1 00:42:40.357 eflags: none 00:42:40.357 sectype: none 00:42:40.357 =====Discovery Log Entry 1====== 00:42:40.357 trtype: tcp 00:42:40.357 adrfam: ipv4 00:42:40.357 subtype: nvme subsystem 00:42:40.357 treq: not specified, sq flow control disable supported 00:42:40.357 portid: 1 00:42:40.357 trsvcid: 4420 00:42:40.357 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:40.357 traddr: 10.0.0.1 00:42:40.357 eflags: none 00:42:40.357 sectype: none 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:40.357 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:40.358 01:09:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:40.358 01:09:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:43.653 Initializing NVMe Controllers 00:42:43.653 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:43.653 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:43.653 Initialization complete. Launching workers. 00:42:43.653 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56154, failed: 0 00:42:43.653 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56154, failed to submit 0 00:42:43.653 success 0, unsuccessful 56154, failed 0 00:42:43.653 01:09:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:43.653 01:09:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:46.939 Initializing NVMe Controllers 00:42:46.939 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:46.939 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:46.939 Initialization complete. Launching workers. 
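kernel_target_abort leaves SPDK's target out of the picture and builds the equivalent NVMe/TCP target from the in-kernel nvmet driver: the trace above creates a subsystem and namespace in configfs, backs the namespace with the local /dev/nvme0n1, and links a port bound to 10.0.0.1:4420 to the subsystem. The xtrace hides the configfs redirect targets, so the attribute file names below are the standard nvmet ones and should be read as an assumption; the directory layout and written values come straight from the trace:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  modprobe nvmet_tcp   # only "modprobe nvmet" is traced; the TCP transport is unloaded at cleanup, so it ends up loaded

  mkdir $subsys
  mkdir $subsys/namespaces/1
  mkdir $nvmet/ports/1

  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # attribute name assumed
  echo 1            > $subsys/attr_allow_any_host
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1            > $subsys/namespaces/1/enable

  echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
  echo tcp      > $nvmet/ports/1/addr_trtype
  echo 4420     > $nvmet/ports/1/addr_trsvcid
  echo ipv4     > $nvmet/ports/1/addr_adrfam

  ln -s $subsys $nvmet/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # yields the two discovery log entries shown above

The abort sweeps in this phase then point the same abort tool at traddr 10.0.0.1 instead of 10.0.0.2.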
00:42:46.939 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99281, failed: 0 00:42:46.939 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25002, failed to submit 74279 00:42:46.939 success 0, unsuccessful 25002, failed 0 00:42:46.939 01:10:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:46.939 01:10:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:50.233 Initializing NVMe Controllers 00:42:50.233 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:50.233 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:50.233 Initialization complete. Launching workers. 00:42:50.233 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97592, failed: 0 00:42:50.233 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24402, failed to submit 73190 00:42:50.233 success 0, unsuccessful 24402, failed 0 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:50.233 01:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:51.169 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:51.169 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:51.169 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:51.169 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:52.106 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:52.366 00:42:52.366 real 0m14.496s 00:42:52.366 user 0m6.785s 00:42:52.366 sys 0m3.247s 00:42:52.366 01:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:52.366 01:10:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:52.366 ************************************ 00:42:52.366 END TEST kernel_target_abort 00:42:52.366 ************************************ 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:52.366 rmmod nvme_tcp 00:42:52.366 rmmod nvme_fabrics 00:42:52.366 rmmod nvme_keyring 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 487054 ']' 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 487054 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 487054 ']' 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 487054 00:42:52.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (487054) - No such process 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 487054 is not found' 00:42:52.366 Process with pid 487054 is not found 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:52.366 01:10:08 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:53.742 Waiting for block devices as requested 00:42:53.742 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:53.742 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:53.742 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:53.999 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:53.999 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:53.999 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:53.999 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:54.257 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:54.257 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:54.257 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:54.257 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:54.515 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:54.515 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:54.515 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:54.515 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:54.774 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:54.774 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:54.774 01:10:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.372 01:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:57.372 00:42:57.372 real 0m38.377s 00:42:57.372 user 1m2.603s 00:42:57.372 sys 0m9.422s 00:42:57.372 01:10:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.372 01:10:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:57.372 ************************************ 00:42:57.372 END TEST nvmf_abort_qd_sizes 00:42:57.372 ************************************ 00:42:57.372 01:10:12 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:57.372 01:10:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:57.372 01:10:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.372 01:10:12 -- common/autotest_common.sh@10 -- # set +x 00:42:57.372 ************************************ 00:42:57.372 START TEST keyring_file 00:42:57.372 ************************************ 00:42:57.372 01:10:12 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:57.372 * Looking for test storage... 
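nvmftestfini above unwinds the fixture in reverse: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the (already exited) target PID is killed, every firewall rule the test tagged is stripped, and the namespace plumbing is removed. The firewall cleanup relies on the SPDK_NVMF comment that was attached when the rule was inserted; roughly:

  # drop only the rules this test added, identified by their SPDK_NVMF comment tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # tear down the test namespace (remove_spdk_ns in the trace) and flush the leftover initiator address
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

Filtering iptables-save output and re-applying it avoids having to track rule numbers, at the cost of briefly reloading the whole ruleset.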
00:42:57.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:57.372 01:10:12 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:57.372 01:10:12 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:42:57.372 01:10:12 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:57.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.372 --rc genhtml_branch_coverage=1 00:42:57.372 --rc genhtml_function_coverage=1 00:42:57.372 --rc genhtml_legend=1 00:42:57.372 --rc geninfo_all_blocks=1 00:42:57.372 --rc geninfo_unexecuted_blocks=1 00:42:57.372 00:42:57.372 ' 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:57.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.372 --rc genhtml_branch_coverage=1 00:42:57.372 --rc genhtml_function_coverage=1 00:42:57.372 --rc genhtml_legend=1 00:42:57.372 --rc geninfo_all_blocks=1 
00:42:57.372 --rc geninfo_unexecuted_blocks=1 00:42:57.372 00:42:57.372 ' 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:57.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.372 --rc genhtml_branch_coverage=1 00:42:57.372 --rc genhtml_function_coverage=1 00:42:57.372 --rc genhtml_legend=1 00:42:57.372 --rc geninfo_all_blocks=1 00:42:57.372 --rc geninfo_unexecuted_blocks=1 00:42:57.372 00:42:57.372 ' 00:42:57.372 01:10:13 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:57.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.372 --rc genhtml_branch_coverage=1 00:42:57.372 --rc genhtml_function_coverage=1 00:42:57.372 --rc genhtml_legend=1 00:42:57.372 --rc geninfo_all_blocks=1 00:42:57.372 --rc geninfo_unexecuted_blocks=1 00:42:57.372 00:42:57.372 ' 00:42:57.372 01:10:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:57.372 01:10:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:57.372 01:10:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:57.372 01:10:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:57.373 01:10:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.373 01:10:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.373 01:10:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.373 01:10:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:57.373 01:10:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:57.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8J7BgdUYsk 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8J7BgdUYsk 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8J7BgdUYsk 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8J7BgdUYsk 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ECxy78iOn7 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:57.373 01:10:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ECxy78iOn7 00:42:57.373 01:10:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ECxy78iOn7 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ECxy78iOn7 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=492815 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:57.373 01:10:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 492815 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 492815 ']' 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:57.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:57.373 01:10:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:57.373 [2024-12-07 01:10:13.224650] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:42:57.373 [2024-12-07 01:10:13.224754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492815 ] 00:42:57.373 [2024-12-07 01:10:13.292361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.373 [2024-12-07 01:10:13.340577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:57.632 01:10:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:57.632 [2024-12-07 01:10:13.613037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:57.632 null0 00:42:57.632 [2024-12-07 01:10:13.645090] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:57.632 [2024-12-07 01:10:13.645579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.632 01:10:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:57.632 [2024-12-07 01:10:13.669116] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:57.632 request: 00:42:57.632 { 00:42:57.632 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:57.632 "secure_channel": false, 00:42:57.632 "listen_address": { 00:42:57.632 "trtype": "tcp", 00:42:57.632 "traddr": "127.0.0.1", 00:42:57.632 "trsvcid": "4420" 00:42:57.632 }, 00:42:57.632 "method": "nvmf_subsystem_add_listener", 00:42:57.632 "req_id": 1 00:42:57.632 } 00:42:57.632 Got JSON-RPC error response 00:42:57.632 response: 00:42:57.632 { 00:42:57.632 "code": 
-32602, 00:42:57.632 "message": "Invalid parameters" 00:42:57.632 } 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:57.632 01:10:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=492828 00:42:57.632 01:10:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:57.632 01:10:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 492828 /var/tmp/bperf.sock 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 492828 ']' 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:57.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:57.632 01:10:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:57.632 [2024-12-07 01:10:13.717359] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:42:57.632 [2024-12-07 01:10:13.717421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492828 ] 00:42:57.890 [2024-12-07 01:10:13.784583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.890 [2024-12-07 01:10:13.829427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:57.890 01:10:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:57.890 01:10:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:57.890 01:10:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:42:57.890 01:10:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:42:58.148 01:10:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ECxy78iOn7 00:42:58.148 01:10:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ECxy78iOn7 00:42:58.406 01:10:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:58.406 01:10:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:58.406 01:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.406 01:10:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.406 01:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.665 
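The keyring_file setup above generates two 16-byte hex keys, wraps each in the NVMe TLS interchange format, stores them in mode-0600 temp files, and registers them with the bdevperf process through its private RPC socket. The xtrace does not show the redirect into the temp file or the inline python that does the encoding, so the sketch below leans on the test's own format_interchange_psk helper rather than spelling the format out; rpc.py stands for the full scripts/rpc.py path shown in the log:

  key0path=$(mktemp)                                                        # /tmp/tmp.8J7BgdUYsk in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # helper sourced from nvmf/common.sh
  chmod 0600 "$key0path"

  # hand the key file to bdevperf over its dedicated socket and list it back
  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
  rpc.py -s /var/tmp/bperf.sock keyring_get_keys

key1 is prepared the same way from 112233445566778899aabbccddeeff00, and the refcount checks that follow confirm each freshly registered key starts at refcnt 1.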
01:10:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.8J7BgdUYsk == \/\t\m\p\/\t\m\p\.\8\J\7\B\g\d\U\Y\s\k ]] 00:42:58.665 01:10:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:58.665 01:10:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:58.665 01:10:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.665 01:10:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.665 01:10:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:58.924 01:10:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ECxy78iOn7 == \/\t\m\p\/\t\m\p\.\E\C\x\y\7\8\i\O\n\7 ]] 00:42:58.924 01:10:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:58.924 01:10:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:58.924 01:10:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:58.924 01:10:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.924 01:10:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.924 01:10:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.490 01:10:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:59.490 01:10:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:59.490 01:10:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:59.490 01:10:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.490 01:10:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.750 [2024-12-07 01:10:15.851771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:00.008 nvme0n1 00:43:00.008 01:10:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:00.008 01:10:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:00.008 01:10:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.008 01:10:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.008 01:10:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.008 01:10:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:00.267 01:10:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:00.267 01:10:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:00.267 01:10:16 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:43:00.267 01:10:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.267 01:10:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.267 01:10:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.267 01:10:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:00.527 01:10:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:00.528 01:10:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:00.528 Running I/O for 1 seconds... 00:43:01.728 10422.00 IOPS, 40.71 MiB/s 00:43:01.728 Latency(us) 00:43:01.728 [2024-12-07T00:10:17.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:01.728 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:01.728 nvme0n1 : 1.01 10474.03 40.91 0.00 0.00 12185.69 5364.24 19612.25 00:43:01.728 [2024-12-07T00:10:17.879Z] =================================================================================================================== 00:43:01.728 [2024-12-07T00:10:17.879Z] Total : 10474.03 40.91 0.00 0.00 12185.69 5364.24 19612.25 00:43:01.728 { 00:43:01.728 "results": [ 00:43:01.728 { 00:43:01.728 "job": "nvme0n1", 00:43:01.728 "core_mask": "0x2", 00:43:01.728 "workload": "randrw", 00:43:01.728 "percentage": 50, 00:43:01.728 "status": "finished", 00:43:01.728 "queue_depth": 128, 00:43:01.728 "io_size": 4096, 00:43:01.728 "runtime": 1.007349, 00:43:01.728 "iops": 10474.026380132407, 00:43:01.728 "mibps": 40.914165547392216, 00:43:01.728 "io_failed": 0, 00:43:01.728 "io_timeout": 0, 00:43:01.728 "avg_latency_us": 12185.68576557602, 00:43:01.728 "min_latency_us": 5364.242962962963, 00:43:01.728 "max_latency_us": 19612.254814814816 00:43:01.728 } 00:43:01.728 ], 00:43:01.728 "core_count": 1 00:43:01.728 } 00:43:01.728 01:10:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:01.728 01:10:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:02.012 01:10:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:02.012 01:10:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:02.012 01:10:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.012 01:10:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.012 01:10:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.012 01:10:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:02.303 01:10:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:02.303 01:10:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:02.303 01:10:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:02.303 01:10:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.303 01:10:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.303 01:10:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.303 01:10:18 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:02.576 01:10:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:02.576 01:10:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:02.576 01:10:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.576 01:10:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:02.839 [2024-12-07 01:10:18.725437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:02.839 [2024-12-07 01:10:18.725965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa94c20 (107): Transport endpoint is not connected 00:43:02.839 [2024-12-07 01:10:18.726957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa94c20 (9): Bad file descriptor 00:43:02.839 [2024-12-07 01:10:18.727956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:02.839 [2024-12-07 01:10:18.727990] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:02.839 [2024-12-07 01:10:18.728021] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:02.839 [2024-12-07 01:10:18.728046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
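The errors above come from the negative check at keyring/file.sh@70: after detaching nvme0, the test attaches again with --psk key1, which is expected to fail (hence the NOT wrapper), and the JSON-RPC error captured below confirms it. For reference, the positive attach flow this test exercises reduces to the following sequence; this is an editor's consolidation of rpc.py calls that appear verbatim in this run (socket path, key names and NQNs are the ones used above), not additional output from the test:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # register a file-backed PSK with the bdevperf application's keyring
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk
    # attach an NVMe/TCP controller, using that PSK for the TLS handshake
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # an attached controller takes a reference on the key; read the refcount back
    # (the log runs keyring_get_keys and the jq filters as separate steps; they are combined here)
    $rpc -s $sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
    # detach the controller and drop the key again
    $rpc -s $sock bdev_nvme_detach_controller nvme0
    $rpc -s $sock keyring_file_remove_key key0
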
00:43:02.839 request: 00:43:02.839 { 00:43:02.839 "name": "nvme0", 00:43:02.839 "trtype": "tcp", 00:43:02.839 "traddr": "127.0.0.1", 00:43:02.839 "adrfam": "ipv4", 00:43:02.839 "trsvcid": "4420", 00:43:02.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.839 "prchk_reftag": false, 00:43:02.839 "prchk_guard": false, 00:43:02.839 "hdgst": false, 00:43:02.839 "ddgst": false, 00:43:02.839 "psk": "key1", 00:43:02.839 "allow_unrecognized_csi": false, 00:43:02.839 "method": "bdev_nvme_attach_controller", 00:43:02.839 "req_id": 1 00:43:02.839 } 00:43:02.839 Got JSON-RPC error response 00:43:02.839 response: 00:43:02.839 { 00:43:02.839 "code": -5, 00:43:02.839 "message": "Input/output error" 00:43:02.839 } 00:43:02.839 01:10:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:02.839 01:10:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:02.839 01:10:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:02.839 01:10:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:02.839 01:10:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:02.839 01:10:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:02.839 01:10:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.839 01:10:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.839 01:10:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:02.839 01:10:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.098 01:10:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:03.098 01:10:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:03.098 01:10:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:03.098 01:10:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.098 01:10:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.098 01:10:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.098 01:10:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:03.356 01:10:19 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:03.356 01:10:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:03.356 01:10:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:03.614 01:10:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:03.614 01:10:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:03.872 01:10:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:03.872 01:10:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:03.872 01:10:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.130 01:10:20 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:04.130 01:10:20 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.8J7BgdUYsk 00:43:04.130 01:10:20 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:04.130 01:10:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.130 01:10:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.388 [2024-12-07 01:10:20.373834] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8J7BgdUYsk': 0100660 00:43:04.388 [2024-12-07 01:10:20.373868] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:04.388 request: 00:43:04.388 { 00:43:04.388 "name": "key0", 00:43:04.388 "path": "/tmp/tmp.8J7BgdUYsk", 00:43:04.388 "method": "keyring_file_add_key", 00:43:04.388 "req_id": 1 00:43:04.388 } 00:43:04.388 Got JSON-RPC error response 00:43:04.388 response: 00:43:04.388 { 00:43:04.388 "code": -1, 00:43:04.388 "message": "Operation not permitted" 00:43:04.388 } 00:43:04.388 01:10:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:04.388 01:10:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:04.388 01:10:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:04.388 01:10:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:04.388 01:10:20 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.8J7BgdUYsk 00:43:04.388 01:10:20 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.388 01:10:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8J7BgdUYsk 00:43:04.646 01:10:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.8J7BgdUYsk 00:43:04.646 01:10:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:04.646 01:10:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:04.646 01:10:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:04.646 01:10:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:04.646 01:10:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:04.646 01:10:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:04.904 01:10:20 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:04.904 01:10:20 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:04.904 01:10:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:04.904 01:10:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:05.162 [2024-12-07 01:10:21.232203] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8J7BgdUYsk': No such file or directory 00:43:05.162 [2024-12-07 01:10:21.232236] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:05.162 [2024-12-07 01:10:21.232268] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:05.162 [2024-12-07 01:10:21.232282] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:05.162 [2024-12-07 01:10:21.232295] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:05.162 [2024-12-07 01:10:21.232320] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:05.162 request: 00:43:05.162 { 00:43:05.162 "name": "nvme0", 00:43:05.162 "trtype": "tcp", 00:43:05.162 "traddr": "127.0.0.1", 00:43:05.162 "adrfam": "ipv4", 00:43:05.162 "trsvcid": "4420", 00:43:05.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.162 "prchk_reftag": false, 00:43:05.162 "prchk_guard": false, 00:43:05.162 "hdgst": false, 00:43:05.162 "ddgst": false, 00:43:05.162 "psk": "key0", 00:43:05.162 "allow_unrecognized_csi": false, 00:43:05.162 "method": "bdev_nvme_attach_controller", 00:43:05.162 "req_id": 1 00:43:05.162 } 00:43:05.162 Got JSON-RPC error response 00:43:05.162 response: 00:43:05.162 { 00:43:05.162 "code": -19, 00:43:05.162 "message": "No such device" 00:43:05.162 } 00:43:05.162 01:10:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:05.162 01:10:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:05.162 01:10:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:05.162 01:10:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:05.162 01:10:21 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:05.162 01:10:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:05.420 01:10:21 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oEqlk3Z0mz 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:05.420 01:10:21 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:05.420 01:10:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oEqlk3Z0mz 00:43:05.679 01:10:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oEqlk3Z0mz 00:43:05.679 01:10:21 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.oEqlk3Z0mz 00:43:05.679 01:10:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oEqlk3Z0mz 00:43:05.679 01:10:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oEqlk3Z0mz 00:43:05.937 01:10:21 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:05.937 01:10:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:06.195 nvme0n1 00:43:06.195 01:10:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:06.195 01:10:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:06.195 01:10:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.195 01:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.195 01:10:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.195 01:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:06.454 01:10:22 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:06.454 01:10:22 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:06.454 01:10:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:06.712 01:10:22 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:06.712 01:10:22 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:06.712 01:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.712 01:10:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:06.712 01:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:06.970 01:10:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:06.970 01:10:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:06.970 01:10:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:06.970 01:10:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.970 01:10:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.970 01:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.970 01:10:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:07.229 01:10:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:07.229 01:10:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:07.229 01:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:07.487 01:10:23 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:07.487 01:10:23 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:07.487 01:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.746 01:10:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:07.746 01:10:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oEqlk3Z0mz 00:43:07.746 01:10:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oEqlk3Z0mz 00:43:08.004 01:10:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ECxy78iOn7 00:43:08.004 01:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ECxy78iOn7 00:43:08.263 01:10:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:08.263 01:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:08.829 nvme0n1 00:43:08.830 01:10:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:08.830 01:10:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:09.092 01:10:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:09.092 "subsystems": [ 00:43:09.092 { 00:43:09.092 "subsystem": "keyring", 00:43:09.092 "config": [ 00:43:09.092 { 00:43:09.092 "method": "keyring_file_add_key", 00:43:09.092 "params": { 00:43:09.092 "name": "key0", 00:43:09.092 "path": "/tmp/tmp.oEqlk3Z0mz" 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "keyring_file_add_key", 00:43:09.092 "params": { 00:43:09.092 "name": "key1", 00:43:09.092 "path": "/tmp/tmp.ECxy78iOn7" 00:43:09.092 } 00:43:09.092 } 00:43:09.092 ] 
00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "subsystem": "iobuf", 00:43:09.092 "config": [ 00:43:09.092 { 00:43:09.092 "method": "iobuf_set_options", 00:43:09.092 "params": { 00:43:09.092 "small_pool_count": 8192, 00:43:09.092 "large_pool_count": 1024, 00:43:09.092 "small_bufsize": 8192, 00:43:09.092 "large_bufsize": 135168, 00:43:09.092 "enable_numa": false 00:43:09.092 } 00:43:09.092 } 00:43:09.092 ] 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "subsystem": "sock", 00:43:09.092 "config": [ 00:43:09.092 { 00:43:09.092 "method": "sock_set_default_impl", 00:43:09.092 "params": { 00:43:09.092 "impl_name": "posix" 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "sock_impl_set_options", 00:43:09.092 "params": { 00:43:09.092 "impl_name": "ssl", 00:43:09.092 "recv_buf_size": 4096, 00:43:09.092 "send_buf_size": 4096, 00:43:09.092 "enable_recv_pipe": true, 00:43:09.092 "enable_quickack": false, 00:43:09.092 "enable_placement_id": 0, 00:43:09.092 "enable_zerocopy_send_server": true, 00:43:09.092 "enable_zerocopy_send_client": false, 00:43:09.092 "zerocopy_threshold": 0, 00:43:09.092 "tls_version": 0, 00:43:09.092 "enable_ktls": false 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "sock_impl_set_options", 00:43:09.092 "params": { 00:43:09.092 "impl_name": "posix", 00:43:09.092 "recv_buf_size": 2097152, 00:43:09.092 "send_buf_size": 2097152, 00:43:09.092 "enable_recv_pipe": true, 00:43:09.092 "enable_quickack": false, 00:43:09.092 "enable_placement_id": 0, 00:43:09.092 "enable_zerocopy_send_server": true, 00:43:09.092 "enable_zerocopy_send_client": false, 00:43:09.092 "zerocopy_threshold": 0, 00:43:09.092 "tls_version": 0, 00:43:09.092 "enable_ktls": false 00:43:09.092 } 00:43:09.092 } 00:43:09.092 ] 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "subsystem": "vmd", 00:43:09.092 "config": [] 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "subsystem": "accel", 00:43:09.092 "config": [ 00:43:09.092 { 00:43:09.092 "method": "accel_set_options", 00:43:09.092 "params": { 00:43:09.092 "small_cache_size": 128, 00:43:09.092 "large_cache_size": 16, 00:43:09.092 "task_count": 2048, 00:43:09.092 "sequence_count": 2048, 00:43:09.092 "buf_count": 2048 00:43:09.092 } 00:43:09.092 } 00:43:09.092 ] 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "subsystem": "bdev", 00:43:09.092 "config": [ 00:43:09.092 { 00:43:09.092 "method": "bdev_set_options", 00:43:09.092 "params": { 00:43:09.092 "bdev_io_pool_size": 65535, 00:43:09.092 "bdev_io_cache_size": 256, 00:43:09.092 "bdev_auto_examine": true, 00:43:09.092 "iobuf_small_cache_size": 128, 00:43:09.092 "iobuf_large_cache_size": 16 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "bdev_raid_set_options", 00:43:09.092 "params": { 00:43:09.092 "process_window_size_kb": 1024, 00:43:09.092 "process_max_bandwidth_mb_sec": 0 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "bdev_iscsi_set_options", 00:43:09.092 "params": { 00:43:09.092 "timeout_sec": 30 00:43:09.092 } 00:43:09.092 }, 00:43:09.092 { 00:43:09.092 "method": "bdev_nvme_set_options", 00:43:09.092 "params": { 00:43:09.092 "action_on_timeout": "none", 00:43:09.092 "timeout_us": 0, 00:43:09.092 "timeout_admin_us": 0, 00:43:09.092 "keep_alive_timeout_ms": 10000, 00:43:09.092 "arbitration_burst": 0, 00:43:09.092 "low_priority_weight": 0, 00:43:09.092 "medium_priority_weight": 0, 00:43:09.092 "high_priority_weight": 0, 00:43:09.092 "nvme_adminq_poll_period_us": 10000, 00:43:09.092 "nvme_ioq_poll_period_us": 0, 00:43:09.092 "io_queue_requests": 512, 
00:43:09.093 "delay_cmd_submit": true, 00:43:09.093 "transport_retry_count": 4, 00:43:09.093 "bdev_retry_count": 3, 00:43:09.093 "transport_ack_timeout": 0, 00:43:09.093 "ctrlr_loss_timeout_sec": 0, 00:43:09.093 "reconnect_delay_sec": 0, 00:43:09.093 "fast_io_fail_timeout_sec": 0, 00:43:09.093 "disable_auto_failback": false, 00:43:09.093 "generate_uuids": false, 00:43:09.093 "transport_tos": 0, 00:43:09.093 "nvme_error_stat": false, 00:43:09.093 "rdma_srq_size": 0, 00:43:09.093 "io_path_stat": false, 00:43:09.093 "allow_accel_sequence": false, 00:43:09.093 "rdma_max_cq_size": 0, 00:43:09.093 "rdma_cm_event_timeout_ms": 0, 00:43:09.093 "dhchap_digests": [ 00:43:09.093 "sha256", 00:43:09.093 "sha384", 00:43:09.093 "sha512" 00:43:09.093 ], 00:43:09.093 "dhchap_dhgroups": [ 00:43:09.093 "null", 00:43:09.093 "ffdhe2048", 00:43:09.093 "ffdhe3072", 00:43:09.093 "ffdhe4096", 00:43:09.093 "ffdhe6144", 00:43:09.093 "ffdhe8192" 00:43:09.093 ] 00:43:09.093 } 00:43:09.093 }, 00:43:09.093 { 00:43:09.093 "method": "bdev_nvme_attach_controller", 00:43:09.093 "params": { 00:43:09.093 "name": "nvme0", 00:43:09.093 "trtype": "TCP", 00:43:09.093 "adrfam": "IPv4", 00:43:09.093 "traddr": "127.0.0.1", 00:43:09.093 "trsvcid": "4420", 00:43:09.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:09.093 "prchk_reftag": false, 00:43:09.093 "prchk_guard": false, 00:43:09.093 "ctrlr_loss_timeout_sec": 0, 00:43:09.093 "reconnect_delay_sec": 0, 00:43:09.093 "fast_io_fail_timeout_sec": 0, 00:43:09.093 "psk": "key0", 00:43:09.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:09.093 "hdgst": false, 00:43:09.093 "ddgst": false, 00:43:09.093 "multipath": "multipath" 00:43:09.093 } 00:43:09.093 }, 00:43:09.093 { 00:43:09.093 "method": "bdev_nvme_set_hotplug", 00:43:09.093 "params": { 00:43:09.093 "period_us": 100000, 00:43:09.093 "enable": false 00:43:09.093 } 00:43:09.093 }, 00:43:09.093 { 00:43:09.093 "method": "bdev_wait_for_examine" 00:43:09.093 } 00:43:09.093 ] 00:43:09.093 }, 00:43:09.093 { 00:43:09.093 "subsystem": "nbd", 00:43:09.093 "config": [] 00:43:09.093 } 00:43:09.093 ] 00:43:09.093 }' 00:43:09.093 01:10:25 keyring_file -- keyring/file.sh@115 -- # killprocess 492828 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 492828 ']' 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 492828 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492828 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492828' 00:43:09.093 killing process with pid 492828 00:43:09.093 01:10:25 keyring_file -- common/autotest_common.sh@973 -- # kill 492828 00:43:09.093 Received shutdown signal, test time was about 1.000000 seconds 00:43:09.093 00:43:09.093 Latency(us) 00:43:09.093 [2024-12-07T00:10:25.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.093 [2024-12-07T00:10:25.244Z] =================================================================================================================== 00:43:09.093 [2024-12-07T00:10:25.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:09.093 
01:10:25 keyring_file -- common/autotest_common.sh@978 -- # wait 492828 00:43:09.353 01:10:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=494302 00:43:09.353 01:10:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 494302 /var/tmp/bperf.sock 00:43:09.353 01:10:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 494302 ']' 00:43:09.353 01:10:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:09.353 01:10:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:09.353 01:10:25 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:09.353 01:10:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:09.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:09.353 01:10:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:09.353 "subsystems": [ 00:43:09.353 { 00:43:09.353 "subsystem": "keyring", 00:43:09.353 "config": [ 00:43:09.353 { 00:43:09.353 "method": "keyring_file_add_key", 00:43:09.353 "params": { 00:43:09.353 "name": "key0", 00:43:09.353 "path": "/tmp/tmp.oEqlk3Z0mz" 00:43:09.353 } 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "method": "keyring_file_add_key", 00:43:09.353 "params": { 00:43:09.353 "name": "key1", 00:43:09.353 "path": "/tmp/tmp.ECxy78iOn7" 00:43:09.353 } 00:43:09.353 } 00:43:09.353 ] 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "subsystem": "iobuf", 00:43:09.353 "config": [ 00:43:09.353 { 00:43:09.353 "method": "iobuf_set_options", 00:43:09.353 "params": { 00:43:09.353 "small_pool_count": 8192, 00:43:09.353 "large_pool_count": 1024, 00:43:09.353 "small_bufsize": 8192, 00:43:09.353 "large_bufsize": 135168, 00:43:09.353 "enable_numa": false 00:43:09.353 } 00:43:09.353 } 00:43:09.353 ] 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "subsystem": "sock", 00:43:09.353 "config": [ 00:43:09.353 { 00:43:09.353 "method": "sock_set_default_impl", 00:43:09.353 "params": { 00:43:09.353 "impl_name": "posix" 00:43:09.353 } 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "method": "sock_impl_set_options", 00:43:09.353 "params": { 00:43:09.353 "impl_name": "ssl", 00:43:09.353 "recv_buf_size": 4096, 00:43:09.353 "send_buf_size": 4096, 00:43:09.353 "enable_recv_pipe": true, 00:43:09.353 "enable_quickack": false, 00:43:09.353 "enable_placement_id": 0, 00:43:09.353 "enable_zerocopy_send_server": true, 00:43:09.353 "enable_zerocopy_send_client": false, 00:43:09.353 "zerocopy_threshold": 0, 00:43:09.353 "tls_version": 0, 00:43:09.353 "enable_ktls": false 00:43:09.353 } 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "method": "sock_impl_set_options", 00:43:09.353 "params": { 00:43:09.353 "impl_name": "posix", 00:43:09.353 "recv_buf_size": 2097152, 00:43:09.353 "send_buf_size": 2097152, 00:43:09.353 "enable_recv_pipe": true, 00:43:09.353 "enable_quickack": false, 00:43:09.353 "enable_placement_id": 0, 00:43:09.353 "enable_zerocopy_send_server": true, 00:43:09.353 "enable_zerocopy_send_client": false, 00:43:09.353 "zerocopy_threshold": 0, 00:43:09.353 "tls_version": 0, 00:43:09.353 "enable_ktls": false 00:43:09.353 } 00:43:09.353 } 00:43:09.353 ] 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "subsystem": "vmd", 00:43:09.353 "config": [] 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "subsystem": "accel", 00:43:09.353 "config": [ 
00:43:09.353 { 00:43:09.353 "method": "accel_set_options", 00:43:09.353 "params": { 00:43:09.353 "small_cache_size": 128, 00:43:09.353 "large_cache_size": 16, 00:43:09.353 "task_count": 2048, 00:43:09.353 "sequence_count": 2048, 00:43:09.353 "buf_count": 2048 00:43:09.353 } 00:43:09.353 } 00:43:09.353 ] 00:43:09.353 }, 00:43:09.353 { 00:43:09.353 "subsystem": "bdev", 00:43:09.353 "config": [ 00:43:09.353 { 00:43:09.353 "method": "bdev_set_options", 00:43:09.354 "params": { 00:43:09.354 "bdev_io_pool_size": 65535, 00:43:09.354 "bdev_io_cache_size": 256, 00:43:09.354 "bdev_auto_examine": true, 00:43:09.354 "iobuf_small_cache_size": 128, 00:43:09.354 "iobuf_large_cache_size": 16 00:43:09.354 } 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_raid_set_options", 00:43:09.354 "params": { 00:43:09.354 "process_window_size_kb": 1024, 00:43:09.354 "process_max_bandwidth_mb_sec": 0 00:43:09.354 } 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_iscsi_set_options", 00:43:09.354 "params": { 00:43:09.354 "timeout_sec": 30 00:43:09.354 } 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_nvme_set_options", 00:43:09.354 "params": { 00:43:09.354 "action_on_timeout": "none", 00:43:09.354 "timeout_us": 0, 00:43:09.354 "timeout_admin_us": 0, 00:43:09.354 "keep_alive_timeout_ms": 10000, 00:43:09.354 "arbitration_burst": 0, 00:43:09.354 "low_priority_weight": 0, 00:43:09.354 "medium_priority_weight": 0, 00:43:09.354 "high_priority_weight": 0, 00:43:09.354 "nvme_adminq_poll_period_us": 10000, 00:43:09.354 "nvme_ioq_poll_period_us": 0, 00:43:09.354 "io_queue_requests": 512, 00:43:09.354 "delay_cmd_submit": true, 00:43:09.354 "transport_retry_count": 4, 00:43:09.354 "bdev_retry_count": 3, 00:43:09.354 "transport_ack_timeout": 0, 00:43:09.354 "ctrlr_loss_timeout_sec": 0, 00:43:09.354 "reconnect_delay_sec": 0, 00:43:09.354 "fast_io_fail_timeout_sec": 0, 00:43:09.354 "disable_auto_failback": false, 00:43:09.354 "generate_uuids": false, 00:43:09.354 "transport_tos": 0, 00:43:09.354 "nvme_error_stat": false, 00:43:09.354 "rdma_srq_size": 0, 00:43:09.354 01:10:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:09.354 "io_path_stat": false, 00:43:09.354 "allow_accel_sequence": false, 00:43:09.354 "rdma_max_cq_size": 0, 00:43:09.354 "rdma_cm_event_timeout_ms": 0, 00:43:09.354 "dhchap_digests": [ 00:43:09.354 "sha256", 00:43:09.354 "sha384", 00:43:09.354 "sha512" 00:43:09.354 ], 00:43:09.354 "dhchap_dhgroups": [ 00:43:09.354 "null", 00:43:09.354 "ffdhe2048", 00:43:09.354 "ffdhe3072", 00:43:09.354 "ffdhe4096", 00:43:09.354 "ffdhe6144", 00:43:09.354 "ffdhe8192" 00:43:09.354 ] 00:43:09.354 } 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_nvme_attach_controller", 00:43:09.354 "params": { 00:43:09.354 "name": "nvme0", 00:43:09.354 "trtype": "TCP", 00:43:09.354 "adrfam": "IPv4", 00:43:09.354 "traddr": "127.0.0.1", 00:43:09.354 "trsvcid": "4420", 00:43:09.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:09.354 "prchk_reftag": false, 00:43:09.354 "prchk_guard": false, 00:43:09.354 "ctrlr_loss_timeout_sec": 0, 00:43:09.354 "reconnect_delay_sec": 0, 00:43:09.354 "fast_io_fail_timeout_sec": 0, 00:43:09.354 "psk": "key0", 00:43:09.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:09.354 "hdgst": false, 00:43:09.354 "ddgst": false, 00:43:09.354 "multipath": "multipath" 00:43:09.354 } 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_nvme_set_hotplug", 00:43:09.354 "params": { 00:43:09.354 "period_us": 100000, 00:43:09.354 "enable": false 00:43:09.354 } 
00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "method": "bdev_wait_for_examine" 00:43:09.354 } 00:43:09.354 ] 00:43:09.354 }, 00:43:09.354 { 00:43:09.354 "subsystem": "nbd", 00:43:09.354 "config": [] 00:43:09.354 } 00:43:09.354 ] 00:43:09.354 }' 00:43:09.354 01:10:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:09.354 [2024-12-07 01:10:25.346480] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:43:09.354 [2024-12-07 01:10:25.346572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494302 ] 00:43:09.354 [2024-12-07 01:10:25.418442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:09.354 [2024-12-07 01:10:25.467104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.613 [2024-12-07 01:10:25.656301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:09.871 01:10:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:09.871 01:10:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:09.871 01:10:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:09.871 01:10:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.871 01:10:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:10.128 01:10:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:10.128 01:10:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:10.128 01:10:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:10.128 01:10:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.128 01:10:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.128 01:10:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.128 01:10:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:10.385 01:10:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:10.385 01:10:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:10.385 01:10:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:10.385 01:10:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.385 01:10:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.385 01:10:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.385 01:10:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:10.642 01:10:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:10.642 01:10:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:10.642 01:10:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:10.642 01:10:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:10.900 01:10:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:10.900 01:10:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:10.900 01:10:26 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.oEqlk3Z0mz /tmp/tmp.ECxy78iOn7 00:43:10.900 01:10:26 keyring_file -- keyring/file.sh@20 -- # killprocess 494302 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 494302 ']' 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 494302 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 494302 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 494302' 00:43:10.900 killing process with pid 494302 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@973 -- # kill 494302 00:43:10.900 Received shutdown signal, test time was about 1.000000 seconds 00:43:10.900 00:43:10.900 Latency(us) 00:43:10.900 [2024-12-07T00:10:27.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:10.900 [2024-12-07T00:10:27.051Z] =================================================================================================================== 00:43:10.900 [2024-12-07T00:10:27.051Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:10.900 01:10:26 keyring_file -- common/autotest_common.sh@978 -- # wait 494302 00:43:11.158 01:10:27 keyring_file -- keyring/file.sh@21 -- # killprocess 492815 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 492815 ']' 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 492815 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492815 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492815' 00:43:11.159 killing process with pid 492815 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@973 -- # kill 492815 00:43:11.159 01:10:27 keyring_file -- common/autotest_common.sh@978 -- # wait 492815 00:43:11.417 00:43:11.417 real 0m14.551s 00:43:11.417 user 0m37.160s 00:43:11.417 sys 0m3.268s 00:43:11.417 01:10:27 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:11.417 01:10:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:11.417 ************************************ 00:43:11.417 END TEST keyring_file 00:43:11.417 ************************************ 00:43:11.417 01:10:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:11.417 01:10:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:11.417 01:10:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:11.417 01:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:11.417 01:10:27 -- 
common/autotest_common.sh@10 -- # set +x 00:43:11.417 ************************************ 00:43:11.417 START TEST keyring_linux 00:43:11.417 ************************************ 00:43:11.417 01:10:27 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:11.417 Joined session keyring: 180263563 00:43:11.675 * Looking for test storage... 00:43:11.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.675 --rc genhtml_branch_coverage=1 00:43:11.675 --rc genhtml_function_coverage=1 00:43:11.675 --rc genhtml_legend=1 00:43:11.675 --rc geninfo_all_blocks=1 00:43:11.675 --rc geninfo_unexecuted_blocks=1 00:43:11.675 00:43:11.675 ' 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.675 --rc genhtml_branch_coverage=1 00:43:11.675 --rc genhtml_function_coverage=1 00:43:11.675 --rc genhtml_legend=1 00:43:11.675 --rc geninfo_all_blocks=1 00:43:11.675 --rc geninfo_unexecuted_blocks=1 00:43:11.675 00:43:11.675 ' 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.675 --rc genhtml_branch_coverage=1 00:43:11.675 --rc genhtml_function_coverage=1 00:43:11.675 --rc genhtml_legend=1 00:43:11.675 --rc geninfo_all_blocks=1 00:43:11.675 --rc geninfo_unexecuted_blocks=1 00:43:11.675 00:43:11.675 ' 00:43:11.675 01:10:27 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.675 --rc genhtml_branch_coverage=1 00:43:11.675 --rc genhtml_function_coverage=1 00:43:11.675 --rc genhtml_legend=1 00:43:11.675 --rc geninfo_all_blocks=1 00:43:11.675 --rc geninfo_unexecuted_blocks=1 00:43:11.675 00:43:11.675 ' 00:43:11.675 01:10:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:11.675 01:10:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:11.675 01:10:27 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:11.675 01:10:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:11.676 01:10:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:11.676 01:10:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:11.676 01:10:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:11.676 01:10:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.676 01:10:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.676 01:10:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.676 01:10:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:11.676 01:10:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:11.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:11.676 /tmp/:spdk-test:key0 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:11.676 
01:10:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:11.676 01:10:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:11.676 01:10:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:11.676 /tmp/:spdk-test:key1 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=494779 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:11.676 01:10:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 494779 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 494779 ']' 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:11.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:11.676 01:10:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:11.936 [2024-12-07 01:10:27.862861] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
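The two files just written, /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, hold the TLS PSK interchange form of the hex keys 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00. A minimal standalone sketch of that encoding follows; it assumes the interchange string is "NVMeTLSkey-1:<2-digit hash id>:base64(PSK bytes || CRC-32 of the PSK, little-endian):" with hash id 00 indicating the configured PSK is used as-is, which matches the strings that show up in the keyctl add commands a few lines below. The authoritative logic is the embedded python helper invoked by format_interchange_psk in test/nvmf/common.sh; the snippet below is only an illustration, not harness code.

  # Hypothetical standalone re-derivation of the key0 interchange string (assumptions as above).
  key=00112233445566778899aabbccddeeff          # same hex string passed to prep_key for key0
  python3 -c '
  import base64, struct, sys, zlib
  psk = sys.argv[1].encode()                     # the ASCII hex string itself is the PSK payload
  crc = struct.pack("<I", zlib.crc32(psk))       # 4-byte CRC-32, appended little-endian (assumption)
  print("NVMeTLSkey-1:00:%s:" % base64.b64encode(psk + crc).decode())
  ' "$key"
  # If the byte-order assumption holds, this prints the same
  # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value that is later
  # loaded into the kernel session keyring with "keyctl add user :spdk-test:key0 ... @s".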
00:43:11.936 [2024-12-07 01:10:27.862966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494779 ] 00:43:11.936 [2024-12-07 01:10:27.927678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:11.936 [2024-12-07 01:10:27.972377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:12.194 [2024-12-07 01:10:28.224874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:12.194 null0 00:43:12.194 [2024-12-07 01:10:28.256917] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:12.194 [2024-12-07 01:10:28.257458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:12.194 829469040 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:12.194 73636502 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=494790 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 494790 /var/tmp/bperf.sock 00:43:12.194 01:10:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 494790 ']' 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:12.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:12.194 01:10:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:12.194 [2024-12-07 01:10:28.326912] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:43:12.194 [2024-12-07 01:10:28.326991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494790 ] 00:43:12.452 [2024-12-07 01:10:28.394438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:12.452 [2024-12-07 01:10:28.440572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:12.452 01:10:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:12.452 01:10:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:12.452 01:10:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:12.452 01:10:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:12.710 01:10:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:12.710 01:10:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:13.279 01:10:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:13.279 01:10:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:13.538 [2024-12-07 01:10:29.440455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:13.538 nvme0n1 00:43:13.538 01:10:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:13.538 01:10:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:13.538 01:10:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:13.538 01:10:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:13.538 01:10:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:13.538 01:10:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.797 01:10:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:13.797 01:10:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:13.797 01:10:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:13.797 01:10:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:13.797 01:10:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.797 01:10:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.797 01:10:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@25 -- # sn=829469040 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:14.056 01:10:30 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 829469040 == \8\2\9\4\6\9\0\4\0 ]] 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 829469040 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:14.056 01:10:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:14.316 Running I/O for 1 seconds... 00:43:15.252 11310.00 IOPS, 44.18 MiB/s 00:43:15.252 Latency(us) 00:43:15.252 [2024-12-07T00:10:31.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:15.252 nvme0n1 : 1.01 11316.81 44.21 0.00 0.00 11241.80 3301.07 14854.83 00:43:15.252 [2024-12-07T00:10:31.403Z] =================================================================================================================== 00:43:15.252 [2024-12-07T00:10:31.403Z] Total : 11316.81 44.21 0.00 0.00 11241.80 3301.07 14854.83 00:43:15.252 { 00:43:15.252 "results": [ 00:43:15.252 { 00:43:15.252 "job": "nvme0n1", 00:43:15.252 "core_mask": "0x2", 00:43:15.252 "workload": "randread", 00:43:15.252 "status": "finished", 00:43:15.252 "queue_depth": 128, 00:43:15.252 "io_size": 4096, 00:43:15.252 "runtime": 1.010797, 00:43:15.252 "iops": 11316.812376768035, 00:43:15.252 "mibps": 44.20629834675014, 00:43:15.252 "io_failed": 0, 00:43:15.252 "io_timeout": 0, 00:43:15.252 "avg_latency_us": 11241.801712529908, 00:43:15.252 "min_latency_us": 3301.0725925925926, 00:43:15.252 "max_latency_us": 14854.826666666666 00:43:15.252 } 00:43:15.252 ], 00:43:15.252 "core_count": 1 00:43:15.252 } 00:43:15.252 01:10:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:15.252 01:10:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:15.510 01:10:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:15.511 01:10:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:15.511 01:10:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:15.511 01:10:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:15.511 01:10:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:15.511 01:10:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:15.769 01:10:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:15.769 01:10:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:15.769 01:10:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:15.769 01:10:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:15.769 01:10:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:15.769 01:10:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:16.028 [2024-12-07 01:10:32.031434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:16.028 [2024-12-07 01:10:32.031933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ba930 (107): Transport endpoint is not connected 00:43:16.028 [2024-12-07 01:10:32.032925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ba930 (9): Bad file descriptor 00:43:16.028 [2024-12-07 01:10:32.033925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:16.028 [2024-12-07 01:10:32.033943] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:16.028 [2024-12-07 01:10:32.033972] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:16.028 [2024-12-07 01:10:32.033986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:16.028 request: 00:43:16.028 { 00:43:16.028 "name": "nvme0", 00:43:16.028 "trtype": "tcp", 00:43:16.028 "traddr": "127.0.0.1", 00:43:16.028 "adrfam": "ipv4", 00:43:16.028 "trsvcid": "4420", 00:43:16.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:16.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:16.028 "prchk_reftag": false, 00:43:16.028 "prchk_guard": false, 00:43:16.028 "hdgst": false, 00:43:16.028 "ddgst": false, 00:43:16.028 "psk": ":spdk-test:key1", 00:43:16.028 "allow_unrecognized_csi": false, 00:43:16.028 "method": "bdev_nvme_attach_controller", 00:43:16.028 "req_id": 1 00:43:16.028 } 00:43:16.028 Got JSON-RPC error response 00:43:16.028 response: 00:43:16.028 { 00:43:16.028 "code": -5, 00:43:16.028 "message": "Input/output error" 00:43:16.029 } 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@33 -- # sn=829469040 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 829469040 00:43:16.029 1 links removed 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@33 -- # sn=73636502 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 73636502 00:43:16.029 1 links removed 00:43:16.029 01:10:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 494790 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 494790 ']' 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 494790 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 494790 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 494790' 00:43:16.029 killing process with pid 494790 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 494790 00:43:16.029 Received shutdown signal, test time was about 1.000000 seconds 00:43:16.029 00:43:16.029 Latency(us) 
00:43:16.029 [2024-12-07T00:10:32.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:16.029 [2024-12-07T00:10:32.180Z] =================================================================================================================== 00:43:16.029 [2024-12-07T00:10:32.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:16.029 01:10:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 494790 00:43:16.288 01:10:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 494779 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 494779 ']' 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 494779 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 494779 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 494779' 00:43:16.288 killing process with pid 494779 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 494779 00:43:16.288 01:10:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 494779 00:43:16.857 00:43:16.857 real 0m5.181s 00:43:16.857 user 0m10.252s 00:43:16.857 sys 0m1.675s 00:43:16.857 01:10:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:16.857 01:10:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.857 ************************************ 00:43:16.857 END TEST keyring_linux 00:43:16.857 ************************************ 00:43:16.857 01:10:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:16.857 01:10:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:16.857 01:10:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:16.857 01:10:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:16.857 01:10:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:16.857 01:10:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:16.857 01:10:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:16.857 01:10:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:16.857 01:10:32 -- common/autotest_common.sh@10 -- # set +x 00:43:16.857 01:10:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:16.857 01:10:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:16.857 01:10:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:16.857 01:10:32 -- common/autotest_common.sh@10 -- # set +x 00:43:18.756 INFO: APP EXITING 00:43:18.756 INFO: killing all 
VMs 00:43:18.756 INFO: killing vhost app 00:43:18.756 INFO: EXIT DONE 00:43:20.132 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:20.132 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:20.132 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:20.132 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:20.132 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:20.132 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:20.132 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:20.132 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:20.132 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:20.132 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:20.132 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:20.132 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:20.132 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:20.132 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:20.132 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:20.132 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:20.132 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:21.512 Cleaning 00:43:21.512 Removing: /var/run/dpdk/spdk0/config 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:21.512 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:21.512 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:21.512 Removing: /var/run/dpdk/spdk1/config 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:21.512 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:21.512 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:21.512 Removing: /var/run/dpdk/spdk2/config 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:21.512 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:21.512 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:21.512 Removing: /var/run/dpdk/spdk3/config 00:43:21.512 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:21.512 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:21.512 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:21.512 Removing: /var/run/dpdk/spdk4/config 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:21.512 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:21.512 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:21.512 Removing: /dev/shm/bdev_svc_trace.1 00:43:21.512 Removing: /dev/shm/nvmf_trace.0 00:43:21.512 Removing: /dev/shm/spdk_tgt_trace.pid110986 00:43:21.512 Removing: /var/run/dpdk/spdk0 00:43:21.512 Removing: /var/run/dpdk/spdk1 00:43:21.512 Removing: /var/run/dpdk/spdk2 00:43:21.512 Removing: /var/run/dpdk/spdk3 00:43:21.512 Removing: /var/run/dpdk/spdk4 00:43:21.512 Removing: /var/run/dpdk/spdk_pid109327 00:43:21.512 Removing: /var/run/dpdk/spdk_pid110067 00:43:21.512 Removing: /var/run/dpdk/spdk_pid110986 00:43:21.512 Removing: /var/run/dpdk/spdk_pid111342 00:43:21.512 Removing: /var/run/dpdk/spdk_pid112034 00:43:21.512 Removing: /var/run/dpdk/spdk_pid112172 00:43:21.512 Removing: /var/run/dpdk/spdk_pid112882 00:43:21.512 Removing: /var/run/dpdk/spdk_pid112904 00:43:21.512 Removing: /var/run/dpdk/spdk_pid113164 00:43:21.512 Removing: /var/run/dpdk/spdk_pid114475 00:43:21.512 Removing: /var/run/dpdk/spdk_pid115397 00:43:21.512 Removing: /var/run/dpdk/spdk_pid115600 00:43:21.512 Removing: /var/run/dpdk/spdk_pid115913 00:43:21.512 Removing: /var/run/dpdk/spdk_pid116123 00:43:21.512 Removing: /var/run/dpdk/spdk_pid116325 00:43:21.512 Removing: /var/run/dpdk/spdk_pid116480 00:43:21.512 Removing: /var/run/dpdk/spdk_pid116634 00:43:21.513 Removing: /var/run/dpdk/spdk_pid116820 00:43:21.513 Removing: /var/run/dpdk/spdk_pid117267 00:43:21.513 Removing: /var/run/dpdk/spdk_pid119761 00:43:21.513 Removing: /var/run/dpdk/spdk_pid119925 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120085 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120099 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120514 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120532 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120831 00:43:21.513 Removing: /var/run/dpdk/spdk_pid120887 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121129 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121140 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121330 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121433 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121811 00:43:21.513 Removing: /var/run/dpdk/spdk_pid121963 00:43:21.513 Removing: /var/run/dpdk/spdk_pid122283 00:43:21.513 Removing: /var/run/dpdk/spdk_pid124400 00:43:21.513 
Removing: /var/run/dpdk/spdk_pid127031 00:43:21.513 Removing: /var/run/dpdk/spdk_pid134039 00:43:21.513 Removing: /var/run/dpdk/spdk_pid134567 00:43:21.513 Removing: /var/run/dpdk/spdk_pid137135 00:43:21.513 Removing: /var/run/dpdk/spdk_pid137361 00:43:21.513 Removing: /var/run/dpdk/spdk_pid140509 00:43:21.513 Removing: /var/run/dpdk/spdk_pid144253 00:43:21.513 Removing: /var/run/dpdk/spdk_pid146397 00:43:21.513 Removing: /var/run/dpdk/spdk_pid152776 00:43:21.513 Removing: /var/run/dpdk/spdk_pid158057 00:43:21.513 Removing: /var/run/dpdk/spdk_pid159345 00:43:21.513 Removing: /var/run/dpdk/spdk_pid160017 00:43:21.513 Removing: /var/run/dpdk/spdk_pid170278 00:43:21.513 Removing: /var/run/dpdk/spdk_pid172571 00:43:21.513 Removing: /var/run/dpdk/spdk_pid227536 00:43:21.513 Removing: /var/run/dpdk/spdk_pid230733 00:43:21.513 Removing: /var/run/dpdk/spdk_pid235167 00:43:21.513 Removing: /var/run/dpdk/spdk_pid239561 00:43:21.513 Removing: /var/run/dpdk/spdk_pid239569 00:43:21.513 Removing: /var/run/dpdk/spdk_pid240221 00:43:21.513 Removing: /var/run/dpdk/spdk_pid240757 00:43:21.513 Removing: /var/run/dpdk/spdk_pid241412 00:43:21.513 Removing: /var/run/dpdk/spdk_pid241813 00:43:21.513 Removing: /var/run/dpdk/spdk_pid241815 00:43:21.513 Removing: /var/run/dpdk/spdk_pid242080 00:43:21.513 Removing: /var/run/dpdk/spdk_pid242213 00:43:21.513 Removing: /var/run/dpdk/spdk_pid242218 00:43:21.513 Removing: /var/run/dpdk/spdk_pid242870 00:43:21.513 Removing: /var/run/dpdk/spdk_pid243407 00:43:21.513 Removing: /var/run/dpdk/spdk_pid244071 00:43:21.513 Removing: /var/run/dpdk/spdk_pid244466 00:43:21.513 Removing: /var/run/dpdk/spdk_pid244469 00:43:21.513 Removing: /var/run/dpdk/spdk_pid244728 00:43:21.513 Removing: /var/run/dpdk/spdk_pid245659 00:43:21.513 Removing: /var/run/dpdk/spdk_pid246481 00:43:21.513 Removing: /var/run/dpdk/spdk_pid251702 00:43:21.513 Removing: /var/run/dpdk/spdk_pid280055 00:43:21.513 Removing: /var/run/dpdk/spdk_pid282966 00:43:21.513 Removing: /var/run/dpdk/spdk_pid284186 00:43:21.513 Removing: /var/run/dpdk/spdk_pid286078 00:43:21.513 Removing: /var/run/dpdk/spdk_pid286216 00:43:21.513 Removing: /var/run/dpdk/spdk_pid286357 00:43:21.513 Removing: /var/run/dpdk/spdk_pid286505 00:43:21.513 Removing: /var/run/dpdk/spdk_pid286989 00:43:21.513 Removing: /var/run/dpdk/spdk_pid288272 00:43:21.513 Removing: /var/run/dpdk/spdk_pid289124 00:43:21.513 Removing: /var/run/dpdk/spdk_pid289436 00:43:21.513 Removing: /var/run/dpdk/spdk_pid291048 00:43:21.513 Removing: /var/run/dpdk/spdk_pid291460 00:43:21.513 Removing: /var/run/dpdk/spdk_pid291909 00:43:21.513 Removing: /var/run/dpdk/spdk_pid294295 00:43:21.513 Removing: /var/run/dpdk/spdk_pid297715 00:43:21.513 Removing: /var/run/dpdk/spdk_pid297716 00:43:21.513 Removing: /var/run/dpdk/spdk_pid297717 00:43:21.513 Removing: /var/run/dpdk/spdk_pid299933 00:43:21.513 Removing: /var/run/dpdk/spdk_pid302137 00:43:21.513 Removing: /var/run/dpdk/spdk_pid305591 00:43:21.513 Removing: /var/run/dpdk/spdk_pid328860 00:43:21.513 Removing: /var/run/dpdk/spdk_pid331621 00:43:21.513 Removing: /var/run/dpdk/spdk_pid335401 00:43:21.772 Removing: /var/run/dpdk/spdk_pid336345 00:43:21.772 Removing: /var/run/dpdk/spdk_pid337327 00:43:21.772 Removing: /var/run/dpdk/spdk_pid338400 00:43:21.772 Removing: /var/run/dpdk/spdk_pid341165 00:43:21.772 Removing: /var/run/dpdk/spdk_pid343695 00:43:21.772 Removing: /var/run/dpdk/spdk_pid346097 00:43:21.772 Removing: /var/run/dpdk/spdk_pid350836 00:43:21.772 Removing: /var/run/dpdk/spdk_pid350844 00:43:21.772 Removing: 
/var/run/dpdk/spdk_pid353735 00:43:21.772 Removing: /var/run/dpdk/spdk_pid353877 00:43:21.772 Removing: /var/run/dpdk/spdk_pid354014 00:43:21.772 Removing: /var/run/dpdk/spdk_pid354398 00:43:21.772 Removing: /var/run/dpdk/spdk_pid354404 00:43:21.772 Removing: /var/run/dpdk/spdk_pid355479 00:43:21.772 Removing: /var/run/dpdk/spdk_pid356657 00:43:21.772 Removing: /var/run/dpdk/spdk_pid357838 00:43:21.772 Removing: /var/run/dpdk/spdk_pid359014 00:43:21.772 Removing: /var/run/dpdk/spdk_pid360192 00:43:21.772 Removing: /var/run/dpdk/spdk_pid361503 00:43:21.772 Removing: /var/run/dpdk/spdk_pid365319 00:43:21.772 Removing: /var/run/dpdk/spdk_pid365650 00:43:21.772 Removing: /var/run/dpdk/spdk_pid366953 00:43:21.772 Removing: /var/run/dpdk/spdk_pid367799 00:43:21.772 Removing: /var/run/dpdk/spdk_pid371513 00:43:21.772 Removing: /var/run/dpdk/spdk_pid373371 00:43:21.772 Removing: /var/run/dpdk/spdk_pid377282 00:43:21.772 Removing: /var/run/dpdk/spdk_pid380728 00:43:21.772 Removing: /var/run/dpdk/spdk_pid387204 00:43:21.772 Removing: /var/run/dpdk/spdk_pid391685 00:43:21.772 Removing: /var/run/dpdk/spdk_pid391705 00:43:21.772 Removing: /var/run/dpdk/spdk_pid404471 00:43:21.772 Removing: /var/run/dpdk/spdk_pid405003 00:43:21.772 Removing: /var/run/dpdk/spdk_pid405403 00:43:21.772 Removing: /var/run/dpdk/spdk_pid405812 00:43:21.772 Removing: /var/run/dpdk/spdk_pid406388 00:43:21.772 Removing: /var/run/dpdk/spdk_pid406799 00:43:21.772 Removing: /var/run/dpdk/spdk_pid407200 00:43:21.772 Removing: /var/run/dpdk/spdk_pid407661 00:43:21.772 Removing: /var/run/dpdk/spdk_pid410233 00:43:21.772 Removing: /var/run/dpdk/spdk_pid410397 00:43:21.772 Removing: /var/run/dpdk/spdk_pid414717 00:43:21.772 Removing: /var/run/dpdk/spdk_pid414842 00:43:21.772 Removing: /var/run/dpdk/spdk_pid418208 00:43:21.772 Removing: /var/run/dpdk/spdk_pid420808 00:43:21.772 Removing: /var/run/dpdk/spdk_pid427718 00:43:21.772 Removing: /var/run/dpdk/spdk_pid428123 00:43:21.772 Removing: /var/run/dpdk/spdk_pid430619 00:43:21.772 Removing: /var/run/dpdk/spdk_pid430773 00:43:21.772 Removing: /var/run/dpdk/spdk_pid433400 00:43:21.772 Removing: /var/run/dpdk/spdk_pid437089 00:43:21.772 Removing: /var/run/dpdk/spdk_pid439243 00:43:21.772 Removing: /var/run/dpdk/spdk_pid445607 00:43:21.772 Removing: /var/run/dpdk/spdk_pid451324 00:43:21.772 Removing: /var/run/dpdk/spdk_pid452615 00:43:21.772 Removing: /var/run/dpdk/spdk_pid453278 00:43:21.772 Removing: /var/run/dpdk/spdk_pid463455 00:43:21.772 Removing: /var/run/dpdk/spdk_pid465696 00:43:21.772 Removing: /var/run/dpdk/spdk_pid467702 00:43:21.772 Removing: /var/run/dpdk/spdk_pid472741 00:43:21.772 Removing: /var/run/dpdk/spdk_pid472750 00:43:21.772 Removing: /var/run/dpdk/spdk_pid475649 00:43:21.772 Removing: /var/run/dpdk/spdk_pid477040 00:43:21.772 Removing: /var/run/dpdk/spdk_pid478386 00:43:21.772 Removing: /var/run/dpdk/spdk_pid479214 00:43:21.772 Removing: /var/run/dpdk/spdk_pid481205 00:43:21.772 Removing: /var/run/dpdk/spdk_pid482078 00:43:21.772 Removing: /var/run/dpdk/spdk_pid487373 00:43:21.772 Removing: /var/run/dpdk/spdk_pid487745 00:43:21.772 Removing: /var/run/dpdk/spdk_pid488136 00:43:21.772 Removing: /var/run/dpdk/spdk_pid489690 00:43:21.772 Removing: /var/run/dpdk/spdk_pid489982 00:43:21.772 Removing: /var/run/dpdk/spdk_pid490370 00:43:21.772 Removing: /var/run/dpdk/spdk_pid492815 00:43:21.772 Removing: /var/run/dpdk/spdk_pid492828 00:43:21.772 Removing: /var/run/dpdk/spdk_pid494302 00:43:21.772 Removing: /var/run/dpdk/spdk_pid494779 00:43:21.772 Removing: 
/var/run/dpdk/spdk_pid494790 00:43:21.772 Clean 00:43:21.772 01:10:37 -- common/autotest_common.sh@1453 -- # return 0 00:43:21.772 01:10:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:21.772 01:10:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:21.772 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:43:22.032 01:10:37 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:43:22.032 01:10:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:22.032 01:10:37 -- common/autotest_common.sh@10 -- # set +x 00:43:22.032 01:10:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:22.032 01:10:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:22.032 01:10:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:22.032 01:10:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:43:22.032 01:10:37 -- spdk/autotest.sh@398 -- # hostname 00:43:22.032 01:10:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:22.032 geninfo: WARNING: invalid characters removed from testname! 00:43:54.109 01:11:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:58.307 01:11:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:00.855 01:11:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:04.164 01:11:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:07.458 01:11:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:09.996 01:11:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:13.292 01:11:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:13.292 01:11:29 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:13.292 01:11:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:13.292 01:11:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:13.292 01:11:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:13.292 01:11:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:13.292 + [[ -n 16263 ]] 00:44:13.292 + sudo kill 16263 00:44:13.304 [Pipeline] } 00:44:13.319 [Pipeline] // stage 00:44:13.324 [Pipeline] } 00:44:13.338 [Pipeline] // timeout 00:44:13.344 [Pipeline] } 00:44:13.358 [Pipeline] // catchError 00:44:13.363 [Pipeline] } 00:44:13.378 [Pipeline] // wrap 00:44:13.384 [Pipeline] } 00:44:13.397 [Pipeline] // catchError 00:44:13.407 [Pipeline] stage 00:44:13.409 [Pipeline] { (Epilogue) 00:44:13.422 [Pipeline] catchError 00:44:13.424 [Pipeline] { 00:44:13.447 [Pipeline] echo 00:44:13.449 Cleanup processes 00:44:13.455 [Pipeline] sh 00:44:13.748 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:13.748 507162 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:13.764 [Pipeline] sh 00:44:14.052 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:14.052 ++ grep -v 'sudo pgrep' 00:44:14.052 ++ awk '{print $1}' 00:44:14.052 + sudo kill -9 00:44:14.052 + true 00:44:14.066 [Pipeline] sh 00:44:14.355 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:26.566 [Pipeline] sh 00:44:26.857 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:26.857 Artifacts sizes are good 00:44:26.876 [Pipeline] archiveArtifacts 00:44:26.886 Archiving artifacts 00:44:27.493 [Pipeline] sh 00:44:27.778 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:27.799 [Pipeline] cleanWs 00:44:27.813 [WS-CLEANUP] Deleting project workspace... 00:44:27.813 [WS-CLEANUP] Deferred wipeout is used... 00:44:27.820 [WS-CLEANUP] done 00:44:27.822 [Pipeline] } 00:44:27.842 [Pipeline] // catchError 00:44:27.858 [Pipeline] sh 00:44:28.166 + logger -p user.info -t JENKINS-CI 00:44:28.176 [Pipeline] } 00:44:28.193 [Pipeline] // stage 00:44:28.198 [Pipeline] } 00:44:28.216 [Pipeline] // node 00:44:28.223 [Pipeline] End of Pipeline 00:44:28.260 Finished: SUCCESS
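The coverage post-processing near the end above reduces to a capture/merge/filter pass with lcov; a genhtml render typically follows but is not part of this log (only the genhtml_* rc switches hint at it). As a rough standalone sketch, with ./build and the output names as placeholders rather than the harness's real paths, the equivalent flow is:

  # Sketch of the lcov flow shown above, outside the Jenkins harness (placeholder paths).
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  lcov $LCOV_OPTS -q -c -i --no-external -d ./build -o cov_base.info      # baseline, zero counters
  # ... run the test suites ...
  lcov $LCOV_OPTS -q -c --no-external -d ./build -o cov_test.info         # counters after the run
  lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info  # merge both captures
  lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info  # drop external code
  genhtml cov_total.info --branch-coverage --function-coverage -o coverage_html  # HTML report (assumed step)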